
Thursday, 25 September 2025

AI Myths Busted! 100 Model Pitfalls for Normies to Dodge 😱

Whoa, folks! An AI guru reviewed 100 models in 30 days and spilled the tea—hilarious and eye-opening 😂. Don't think AI's just for techies; it's gold for us regular Joes in learning, work, and life. Let's unpack the lessons to skip the blunders.💡


First off, AI model "moats" vanish fast! Today's top dog is tomorrow's has-been within a couple of months. Lesson? Ditch single subscriptions; smart users route through platforms like Grok or OpenRouter and switch models per task. Counterintuitive, right? Pricey ain't always lasting; free open source rocks! For normies, when learning AI, grab open models like DeepSeek or Qwen for cheap projects and level up instantly.😎
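Curious what "switch per task" actually looks like? Here's a minimal sketch in Python against OpenRouter's OpenAI-compatible endpoint; the model IDs are just examples (swap in whatever is hot when you read this), and it assumes an OPENROUTER_API_KEY environment variable.

import os
from openai import OpenAI  # OpenRouter speaks the OpenAI API dialect

# One client, many models: point the OpenAI SDK at OpenRouter.
client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)

# Pick a different model per task instead of marrying one subscription.
MODEL_FOR_TASK = {
    "coding": "deepseek/deepseek-chat",
    "chat": "qwen/qwen-2.5-72b-instruct",
}

def ask(task, prompt):
    resp = client.chat.completions.create(
        model=MODEL_FOR_TASK[task],
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

print(ask("coding", "Write a one-line Python list comprehension for squares."))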


Next, open source is catching up to the closed giants! The gap's basically gone; privacy fans, run Ollama locally: safe and wallet-friendly. Controversy: some argue closed Claude 4 still crushes complex reasoning. You buying? At work, fine-tune a small model like Llama 3.2 1B for speedy coding and watch productivity skyrocket! In life, whip up recipes or chats without waiting; speed rules.🚀


Benchmarks? Total scam! Models cheat for scores but flop IRL. Lesson: Test yourself! Easy example: Wanna learn drawing? Use moondream tiny model for quick inspo pics—snappy, no lag, beats big boys dragging feet. Outcome? Even klutzes nail art dreams, work prezos pop, life gets funnier.🎨
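A dead-simple way to "test yourself": time the models you're considering on a prompt you actually care about and eyeball the answers. A minimal sketch, again assuming OpenRouter's OpenAI-compatible endpoint and example model IDs:

import os
import time
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)

CANDIDATES = ["deepseek/deepseek-chat", "qwen/qwen-2.5-72b-instruct"]  # examples only
PROMPT = "Turn this grocery list into a 3-day meal plan: eggs, rice, tofu, spinach."

for model_id in CANDIDATES:
    start = time.time()
    answer = client.chat.completions.create(
        model=model_id,
        messages=[{"role": "user", "content": PROMPT}],
    ).choices[0].message.content
    # Your benchmark, not theirs: was it fast, and is the answer actually usable?
    print(f"{model_id}: {time.time() - start:.1f}s\n{answer[:200]}\n")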


Lastly, speed beats accuracy! No one waits 30 secs. Plain talk: AI's your buddy, quicker the better. Sneaky easter egg: These pint-sized models may look meh, but they're pros at one thing, packing a punch to make your solutions "climax" non-stop (hey, mind out of gutter—it's idea climaxes!~).😂


Bottom line, don't fear AI—grab these tips for smoother learning, easier work, funner life. Give it a whirl, peeps!❤️ 




Tuesday, 25 February 2025

WeChat AI Chatbot, Super-Evolved: A DeepSeek Integration Guide Even Beginners Can Handle

Say goodbye to boring small talk: with DeepSeek, WeChat turns into a smart assistant in seconds.



To connect DeepSeek to the chatgpt-on-wechat project, follow the steps below:

Using a DeepSeek API Key

1. Get a DeepSeek API Key

First, obtain a DeepSeek API key:

  1. Visit the official DeepSeek website, or use the free OpenRouter service to call the DeepSeek API.

  2. Register and log in to your account.

  3. Create a new API key in the API Keys section.

  4. Copy the generated API key and store it somewhere safe.
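Before wiring the key into WeChat, it can save debugging time to confirm it works on its own. A minimal sketch, assuming DeepSeek's OpenAI-compatible endpoint and a DEEPSEEK_API_KEY environment variable (if you went through OpenRouter instead, swap in its base URL and your OpenRouter key):

import os
from openai import OpenAI  # DeepSeek exposes an OpenAI-compatible API

client = OpenAI(
    base_url="https://api.deepseek.com/v1",
    api_key=os.environ["DEEPSEEK_API_KEY"],
)

# One-off request: if this prints a greeting, the key is good to go.
resp = client.chat.completions.create(
    model="deepseek-chat",
    messages=[{"role": "user", "content": "你好!"}],
)
print(resp.choices[0].message.content)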

2. Configure the chatgpt-on-wechat project

  1. Clone the chatgpt-on-wechat project to your machine:

git clone https://github.com/zhayujie/chatgpt-on-wechat.git

cd chatgpt-on-wechat

  2. Install the required dependencies:

pip3 install -r requirements.txt

  3. Install the optional dependencies (images, audio, etc.):

pip3 install -r requirements-optional.txt

  4. In the project root, create or edit the config.json file and add the DeepSeek settings. Make sure you are running Python >= 3.8, and see the project's official setup notes for details:

{
    "channel_type": "wx",
    "model": "deepseek-chat",
    "open_ai_api_key": "YOUR_DEEPSEEK_API_KEY",
    "proxy": "",
    "api_base_url": "https://api.deepseek.com/v1",
    "single_chat_prefix": ["bot"],
    "group_chat_prefix": ["@bot"],
    "wechat_prefix": ["@bot"]
}

If you call DeepSeek through OpenRouter instead, point "api_base_url" at the OpenRouter endpoint (typically "https://openrouter.ai/api/v1") and use your OpenRouter API key.

3. Run the project

Start the project with the following command:

python app.py


Then scan the QR code with WeChat to log in. Once logged in, your WeChat account can chat through the DeepSeek API; the first login must be confirmed on your phone.

Running DeepSeek Locally with Ollama

1. Install Ollama

  1. Visit the Ollama website and download the version for your operating system.

  2. Install Ollama; the official GitHub repository lists several installation methods.

2. Download and run a DeepSeek model

  1. Open a terminal and run the following command to download a DeepSeek model (deepseek-coder-v2 is just an example; pull whichever model you actually need):
    ollama pull deepseek-coder-v2

  2. Start the DeepSeek model:

ollama run deepseek-coder-v2

Ollama listens on http://localhost:11434 by default; you can test it with curl:

curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-coder-v2",
  "prompt": "你好!",
  "stream": false
}'

If the reply contains a "response" field with some kind of greeting, the API service is working.

3. Configure the chatgpt-on-wechat project

  1. Clone the project and install the dependencies (as described above).

  2. Edit the config.json file; see the project's official setup notes for details:

{
    "channel_type": "wx",
    "model": "deepseek-chat",
    "open_ai_api_key": "ollama",
    "proxy": "",
    "api_base_url": "http://localhost:11434",
    "single_chat_prefix": ["bot"],
    "group_chat_prefix": ["@bot"],
    "wechat_prefix": ["@bot"]
}

  3. Add a new Ollama bot to your local repository (bot/ollama_bot.py):

# Minimal sketch of bot/ollama_bot.py; adapt it to the project's bot base class as needed.
import requests

class OllamaBot:
    def __init__(self, model="deepseek-coder-v2", ollama_base_url="http://localhost:11434"):
        self.model = model
        self.ollama_base_url = ollama_base_url

    def reply(self, query, context=None):
        # Call the local Ollama chat endpoint; "stream": False returns a single JSON object.
        response = requests.post(
            f"{self.ollama_base_url}/api/chat",
            json={
                "model": self.model,
                "messages": [{"role": "user", "content": query}],
                "stream": False
            }
        )
        return response.json()["message"]["content"]
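You can smoke-test the bot on its own before hooking it into WeChat. A quick check, assuming the OllamaBot sketch above and a running Ollama instance:

if __name__ == "__main__":
    # Requires `ollama run deepseek-coder-v2` (or your chosen model) to be running.
    bot = OllamaBot()
    print(bot.reply("Write a hello-world one-liner in Python."))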

4. Run the project

Start the project with the following command:

OLLAMA_HOST=0.0.0.0 python3 app.py


Then scan the QR code with WeChat to log in. Your WeChat account can now chat with DeepSeek through your local Ollama instance; the first login must be confirmed on your phone.

Be sure to read the chatgpt-on-wechat project's documentation carefully, and consult the official DeepSeek and Ollama docs when you need more detail.