
Tuesday, 28 January 2025

Steal This Trick! Free Local AI Tools: Ollama and LM Studio Tutorial

Local AI Freedom, Unleash Infinite Possibilities


Hey guys, today I'm going to share two awesome local AI tools that can give you AI freedom without spending a penny!


1. Ollama: Extremely Easy to Get Started

Ollama is very simple to use. First, go to its official website and click the GitHub link. There are two download methods: the installer package is more convenient, while manual installation suits those who like to tinker. After installing, go back to the home page and click "Explore models" to pick and download the large model you want. Once the download finishes, open a terminal and start using it. For example, if you downloaded Llama 3.2, the startup command is "ollama run llama3.2". Once it starts, you can ask questions directly; when you're done, just type "/bye" to exit. It's that simple!
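Here is a quick sketch of what that terminal session looks like, assuming you grabbed Llama 3.2 as in the example above (swap in whatever model you actually downloaded):

# start an interactive chat; the model is pulled automatically on the first run
ollama run llama3.2
>>> Why is the sky blue?
>>> /bye

# list the models installed locally
ollama list

# remove a model you no longer need
ollama rm llama3.2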


2. LM Studio: User-Friendly Graphical Interface

LM Studio is also great. Its official website can be reached without a VPN; just download the installer and run it. It has a graphical interface, and the built-in quick tutorial is not very useful, so you can skip it. There are no models at first: click the fourth button in the left-hand sidebar to search for and download models. If you are in a region that needs a VPN, you can grab the models from Hugging Face or one of its mirror sites instead. Clicking the third button in the left-hand sidebar lets you change where the downloaded models are stored.

After downloading a model, click the first button in the left-hand sidebar to load it; when you're done, click "Eject" to unload it. Taking the popular DeepSeek model as an example: asked whether it can search for online information, the answer is no, though it will point you to other search engines. Asked about its knowledge cutoff, it says July 2024. We also tried the "Grandma's Secret Medicine" jailbreak question, and it handed us a piece of code right away; since the model runs locally, you can generate these endlessly. Of course, I have no idea whether these secret medicines actually work. Clearing the chat history is also a one-click operation.


3. Combining It with Your Work: A Translation App as an Example

LM Studio is free for personal use and can be combined with your work. Click the second button in the left-hand sidebar to load a model; the "API Usage" panel in the right-hand sidebar will show the model's API name and the local server address. Taking a translation app as an example: open its settings page, select "More translation services", then "Add OpenAI compatible service", give the service a custom name, copy and paste the server address, enter anything for the API Key, and copy and paste the model name. Click "Verify service", and if a green "Successful" label appears, the setup is complete. If you no longer want it, deleting the service is just as easy.
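If you want to sanity-check the local server from the terminal before pointing another app at it, a couple of curl calls will do. Treat this as a sketch: http://localhost:1234/v1 is the address LM Studio typically shows by default, but use whatever address the "API Usage" panel displays on your machine, and replace the placeholder model name with the one you actually loaded.

# list the models the local server exposes (copy the exact id shown here)
curl http://localhost:1234/v1/models

# minimal OpenAI-compatible chat request; "your-model-name" is a placeholder
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "your-model-name", "messages": [{"role": "user", "content": "Say hello in French."}]}'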

These two local AI tools come with no complicated restrictions and leave you free to use them however you like. Give them a try and kick off your new AI experience!


Sunday, 27 October 2024

ComfyUI & Stable Diffusion: The Ultimate Resource Hack - 1 Minute to Sync Models and Nodes

Stop Wasting, Start Syncing!

Fed up with the hassle of double-dipping into model installations? Dive into our 60-second hack to sync ComfyUI with your local Stable Diffusion WebUI, sparing you the agony of redundant setups. We'll show you how to effortlessly glide through model, node, CLIP, VAE, and Lora configurations, turning your dual UI woes into a thing of the past. Plus, for path-confused AI artists, we've got the config file right here, ready for a quick tweak and roll.

Ready to unleash the power of your AI art tools without the clutter? First, hunt down that hidden gem in your ComfyUI folder: the 'extra_model_paths.yaml.example' file. Rename it to 'extra_model_paths.yaml', and boom, you're in the game (a terminal sketch of the rename follows below). Then fire up your favorite text editor, or if you're feeling commando, dive into the terminal (Mac/Linux) or command prompt (Windows), and tweak this configuration file so ComfyUI can harness the magic of the Stable Diffusion WebUI models, nodes, CLIP, VAE, and Lora, all from the cozy confines of your ComfyUI. A complete example of the configuration file follows.
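A minimal sketch of the rename step in the terminal, assuming ComfyUI is installed at ~/ComfyUI (adjust the path to wherever yours actually lives; on Windows you can simply rename the file in Explorer):

# Mac/Linux: rename the example file so ComfyUI picks it up on the next start
cd ~/ComfyUI
mv extra_model_paths.yaml.example extra_model_paths.yaml

With the file renamed, fill it in along the lines of the example below.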
# Rename this to extra_model_paths.yaml and ComfyUI will load it

# config for a1111 ui
a111:
    base_path: /Users/your_username/stable-diffusion-webui/  # path to your Stable Diffusion WebUI installation
    checkpoints: models/Stable-diffusion  # Stable Diffusion checkpoint (model) folder
    configs: models/Stable-diffusion  # Stable Diffusion config files
    vae: models/VAE  # VAE models
    # Lora / LyCORIS model folders (one per line; comments are not allowed inside the block)
    loras: |
         models/Lora
         models/LyCORIS
    # upscaler model folders: ESRGAN / RealESRGAN / SwinIR
    upscale_models: |
         models/ESRGAN
         models/RealESRGAN
         models/SwinIR
    embeddings: embeddings  # textual embeddings
    hypernetworks: models/hypernetworks  # hypernetwork models
    controlnet: models/ControlNet  # ControlNet models

# config for comfyui
comfyui:
    base_path: /Users/your_username/ComfyUI/  # path to your ComfyUI installation
    checkpoints: /Users/your_username/stable-diffusion-webui/models/Stable-diffusion  # Stable Diffusion checkpoints inside the A1111 install
    clip: /Users/your_username/stable-diffusion-webui/models/clip/  # A1111 CLIP model folder
    clip_vision: /Users/your_username/stable-diffusion-webui/models/clip_vision/  # CLIP vision models
    configs: /Users/your_username/stable-diffusion-webui/models/configs/  # Stable Diffusion config files
    controlnet: /Users/your_username/stable-diffusion-webui/models/ControlNet/  # ControlNet models
    embeddings: /Users/your_username/stable-diffusion-webui/embeddings/  # textual embeddings
    loras: /Users/your_username/stable-diffusion-webui/models/Lora/  # Lora models
    upscale_models: /Users/your_username/stable-diffusion-webui/models/upscale_models/  # upscaler models
    vae: /Users/your_username/stable-diffusion-webui/models/VAE/  # VAE models

The base_path values (and any full paths like those in the comfyui section) must be absolute paths to where the files actually live on your system; the shorter entries are resolved relative to their section's base_path.
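If you are not sure a path is right, a quick check from the terminal saves a round of head-scratching; a sketch, substituting your actual username and install locations:

# each of these should list model files rather than complain that the path doesn't exist
ls /Users/your_username/stable-diffusion-webui/models/Stable-diffusion
ls /Users/your_username/stable-diffusion-webui/models/Lora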