<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0">
  <channel><title>blog.matterxiaomi.com</title>
<description>This site is all about blog.matterxiaomi.com.</description>
<generator>Miniblog.Core</generator>
<link>http://blog.matterxiaomi.com/</link>
<item>
  <title>Matter Bridge in Home Assistant Part 2 - Install Matterbridge and Connect to Home Assistant</title>
  <link>http://blog.matterxiaomi.com/blog/matter-bridge-part2/</link>
  <description>&lt;p&gt;Matter Bridge in Home Assistant Part 2 - Install Matterbridge and Connect to Home Assistant&lt;/p&gt;
&lt;div class="mce-toc"&gt;
&lt;h2&gt;Table of Contents&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#mcetoc_1jhfa2nbb8"&gt;Quick start&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#mcetoc_1jhfadikr2"&gt;Install and configure&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#mcetoc_1jhfa2nbb9"&gt;How to Use&amp;nbsp;&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;h2 id="mcetoc_1jhfa2nbb8"&gt;Quick start&lt;/h2&gt;
&lt;p&gt;I set up Matterbridge as follows:&lt;/p&gt;
&lt;p&gt;Install the Matterbridge Docker container.&lt;/p&gt;
&lt;p&gt;Create a long-lived access token so the home-assistant-matter-hub container can interact with your Home Assistant instance.&lt;/p&gt;
&lt;p&gt;Configure communication between the Matter hub and Home Assistant: Matterbridge connects to Home Assistant with the URL and the token.&lt;/p&gt;
&lt;p&gt;Expose a Home Assistant device as a Matter bridge:&lt;/p&gt;
&lt;pre class="language-markup"&gt;&lt;code&gt;Open http://192.168.2.125:8482/ in a Chrome browser

Create a new bridge

Add a device ("pattern: switch.air_con") to the new bridge

Start it to generate a pairing QR code

Connect the accessory to Apple Home&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 id="mcetoc_1jhfadikr2"&gt;Install and configure&lt;/h2&gt;
&lt;p&gt;docker-compose.yml&lt;/p&gt;
&lt;p&gt;You need to create an access token in your Home Assistant instance and set it in the compose file like this:&lt;/p&gt;
&lt;pre class="language-python"&gt;&lt;code&gt;services:
  matter-hub:
    image: ghcr.io/t0bst4r/home-assistant-matter-hub:3.0.1
    restart: unless-stopped
    network_mode: host
    environment: # more options can be found in the configuration section
      - HAMH_HOME_ASSISTANT_URL=http://192.168.2.125:8123/
      - HAMH_HOME_ASSISTANT_ACCESS_TOKEN=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJhMzcwZDExYjM4MjE0YzFmYThmZTk3NDZjMDQyODU2NSIsImlhdCI6MTc3MTA4MTk1NSwiZXhwIjoyMDg2NDQxOTU1fQ.vhZD-KhJe4XIXd6_XvBE92y4T5W1aICSfBCbTTCvFL4
      - HAMH_LOG_LEVEL=info
      - HAMH_HTTP_PORT=8482
    volumes:
      - /datadocker/home-assistant-matter-hub:/data&lt;/code&gt;&lt;/pre&gt;
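&lt;p&gt;Before starting the container, you can sanity-check the token against the Home Assistant REST API. A minimal sketch (the URL matches the compose file above; replace the token placeholder with your own):&lt;/p&gt;
&lt;pre class="language-python"&gt;&lt;code&gt;# Quick long-lived access token check against the Home Assistant REST API.
# The URL is the one from docker-compose.yml above; TOKEN is a placeholder.
import requests

TOKEN = "eyJ..."  # your long-lived access token from the HA profile page
resp = requests.get(
    "http://192.168.2.125:8123/api/",
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=10,
)
print(resp.status_code, resp.json())  # expect: 200 {'message': 'API running.'}&lt;/code&gt;&lt;/pre&gt;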
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;Now you can open the web UI at:&lt;/p&gt;
&lt;p&gt;http://192.168.2.125:8482/&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 id="mcetoc_1jhfa2nbb9"&gt;How to Use&lt;/h2&gt;
&lt;p&gt;Expose a Home Assistant device as a Matter bridge.&lt;/p&gt;
&lt;p&gt;Open http://192.168.2.125:8482/.&lt;/p&gt;
&lt;p&gt;Create a new bridge for the device; the home-assistant-matter-hub container fetches the entity from Home Assistant via its API.&lt;/p&gt;
&lt;p&gt;type: pattern&lt;/p&gt;
&lt;p&gt;value: light.yeelink_cn_ceiling21_s_2_light&lt;/p&gt;
&lt;pre class="language-python"&gt;&lt;code&gt;{
  "name": "matterbridgeceilling21v2",
  "port": 5543,
  "filter": {
    "include": [
      {
        "type": "pattern",
        "value": "light.yeelink_cn_476690814_ceiling21_s_2_light"
      }
    ],
    "exclude": []
  },
  "featureFlags": {
    "coverDoNotInvertPercentage": false,
    "includeHiddenEntities": false
  }
}&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;</description>
  <author>blog.matterxiaomi.com</author>
  <category>matter bridge</category>
  <guid isPermaLink="false">http://blog.matterxiaomi.com/blog/matter-bridge-part2/</guid>
  <pubDate>Sun, 15 Feb 2026 00:02:19 GMT</pubDate>
</item>
<item>
  <title>Ecovacs in Home Assistant Part6 - Robot Vacuum Control MCP Server</title>
  <link>http://blog.matterxiaomi.com/blog/ecovacs-part6/</link>
  <description>&lt;p&gt;MCP protocol reference:&lt;/p&gt;
&lt;p&gt;https://github.com/ecovacs-ai/ecovacs-mcp/blob/main/ecovacs_mcp/robot_mcp_stdio.py&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;Custom integration:&lt;/p&gt;
&lt;p&gt;https://github.com/hoangminh1109/ecovacs_cn&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;https://open.ecovacs.com/#/serviceOverview&lt;/p&gt;</description>
  <author>blog.matterxiaomi.com</author>
  <category>vacuum</category>
  <guid isPermaLink="false">http://blog.matterxiaomi.com/blog/ecovacs-part6/</guid>
  <pubDate>Fri, 13 Feb 2026 13:17:29 GMT</pubDate>
</item>
<item>
  <title>ModelScope vs Hugging Face vs k2-fsa.github.io vs Kaldi vs Sherpa</title>
  <link>http://blog.matterxiaomi.com/blog/create-wyoming-server-home-assistant-part5/</link>
  <description>&lt;p&gt;Hugging Face, ModelScope, and k2-fsa.github.io (specifically the k2-fsa/sherpa-onnx project) represent different approaches to the machine learning ecosystem.&lt;/p&gt;
&lt;p&gt;Hugging Face and ModelScope host everything; k2-fsa and Sherpa only do speech.&lt;/p&gt;
&lt;p&gt;k2-fsa and Sherpa are highly specialized tools focused on speech recognition (ASR) and synthesis (TTS).&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;Sherpa (often referred to as sherpa-onnx or sherpa-ncnn) is a lightweight speech-to-text (ASR) and text-to-speech (TTS) engine.&amp;nbsp;Best for: Deploying speech models on edge devices (Android, iOS, WebAssembly, ARM boards) or high-performance servers, prioritizing low latency and CPU efficiency.&lt;/p&gt;
&lt;div class="mce-toc"&gt;
&lt;h2&gt;Table of Contents&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#mcetoc_1jhchklu53"&gt;Hugging Face (Global AI model platform)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#mcetoc_1jhchiqmm1"&gt;ModelScope(The Alibaba/Chinese AI Industrial model platform)&amp;nbsp;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#mcetoc_1jhchocht5"&gt;specialized tools focused on speech recognition (ASR) and synthesis (TTS)&lt;/a&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#mcetoc_1jh8e1hud2"&gt;k2-fsa (Next-Gen Kaldi)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#mcetoc_1jhcht3fl7"&gt;Sherpa(The Real-Time Speech Deployment Tool)&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;p&gt;These tools fall into two categories: Model Ecosystems (Hugging Face &amp;amp; ModelScope) and Speech Recognition Frameworks (Kaldi, k2-fsa &amp;amp; Sherpa).&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 id="mcetoc_1jhchklu53"&gt;&lt;strong&gt;Hugging Face (Global AI model platform)&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;The global industry standard. It supports NLP, computer vision, audio, and multimodal models via the transformers and diffusers libraries.&lt;/p&gt;
&lt;p&gt;Download models:&lt;/p&gt;
&lt;p&gt;https://huggingface.co/models&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;https://huggingface.co/FunAudioLLM/SenseVoiceSmall&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;https://huggingface.co/funasr/paraformer-zh/blame/7904416f6cb6290ee7dc0b2ddb2993a9fe4f421a/README.md&lt;/p&gt;
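&lt;p&gt;As a small illustration of the transformers side, an ASR pipeline sketch (openai/whisper-tiny is an illustrative model choice, not one used elsewhere in this post):&lt;/p&gt;
&lt;pre class="language-python"&gt;&lt;code&gt;# Hugging Face transformers pipeline for speech recognition
# (openai/whisper-tiny is an illustrative model; any ASR model id works).
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="openai/whisper-tiny")
result = asr("test.wav")  # 16 kHz mono input works best
print(result["text"])&lt;/code&gt;&lt;/pre&gt;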
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 id="mcetoc_1jhchiqmm1"&gt;&lt;strong&gt;ModelScope (The Alibaba/Chinese AI Industrial model platform)&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;ModelScope is an AI model hub led by Alibaba DAMO Academy. It provides pre-trained models, pipelines, and deployment tools, and is especially strong in Chinese language and speech technologies. ModelScope is often described as the "Chinese Hugging Face."&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;https://modelscope.cn/search?search=sherpa%20onnx&lt;/p&gt;
&lt;p&gt;Inference frameworks:&lt;/p&gt;
&lt;p&gt;1. funasr&lt;/p&gt;
&lt;p&gt;2. funasr-onnx&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;See: https://github.com/modelscope/FunASR?tab=readme-ov-file#sensevoice&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 id="mcetoc_1jhchocht5"&gt;specialized tools focused on speech recognition (ASR) and synthesis (TTS)&lt;/h2&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Kaldi: Speech Toolkit&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The "grandfather" of modern speech recognition. It is a C++ based toolkit developed primarily by Dan Povey.&lt;/p&gt;
&lt;p&gt;Demo: KaldiRecognizer&lt;/p&gt;
&lt;p&gt;https://github.com/rhasspy/wyoming-faster-whisper/blob/main/wyoming_faster_whisper/__main__.py&lt;/p&gt;
&lt;h3 id="mcetoc_1jh8e1hud2"&gt;k2-fsa (Next-Gen Kaldi)&lt;/h3&gt;
&lt;p&gt;Speech toolkit. The modern successor.&lt;/p&gt;
&lt;p&gt;What it is: Often called "Next-gen Kaldi." It is a complete rewrite of Kaldi&amp;rsquo;s core concepts to make them natively compatible with &lt;strong&gt;PyTorch&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Key Repositories:&lt;/p&gt;
&lt;p&gt;Icefall: Where the actual training recipes for speech models (like Zipformer) live.&lt;/p&gt;
&lt;p&gt;k2: The core library for differentiable FSTs. (Kaldi itself is the older tool that was used before k2-fsa existed.)&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;Download sherpa-onnx:&lt;/p&gt;
&lt;p&gt;https://github.com/k2-fsa/sherpa-onnx/releases&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;Download sherpa-onnx ASR models:&lt;/p&gt;
&lt;p&gt;https://k2-fsa.github.io/sherpa/onnx/pretrained_models/index.html&lt;/p&gt;
&lt;p&gt;https://github.com/k2-fsa/sherpa-onnx/releases/tag/asr-models&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 id="mcetoc_1jh9dcjgo1"&gt;Download the Silero VAD ONNX model&lt;/h4&gt;
&lt;pre class="language-markup"&gt;&lt;code&gt;# https://k2-fsa.github.io/sherpa/onnx/sense-voice/pretrained.html#sherpa-onnx-sense-voice-zh-en-ja-ko-yue-2024-07-17
wget https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/silero_vad.onnx
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;Download URL: https://k2-fsa.github.io/sherpa/onnx/vad/silero-vad.html#download-models-files&lt;/p&gt;
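&lt;p&gt;Once downloaded, the model can be exercised through the sherpa-onnx Python API. A minimal sketch (API names assumed from the sherpa-onnx VAD examples; test.wav is a 16 kHz mono placeholder file):&lt;/p&gt;
&lt;pre class="language-python"&gt;&lt;code&gt;# Minimal Silero VAD sketch with sherpa-onnx (API assumed from the
# sherpa-onnx Python examples; test.wav is a 16 kHz mono placeholder).
import sherpa_onnx
import soundfile as sf

config = sherpa_onnx.VadModelConfig()
config.silero_vad.model = "./silero_vad.onnx"  # file from the wget above
config.sample_rate = 16000

vad = sherpa_onnx.VoiceActivityDetector(config, buffer_size_in_seconds=30)

samples, sample_rate = sf.read("test.wav", dtype="float32")
window = config.silero_vad.window_size  # samples consumed per VAD step
for i in range(0, len(samples), window):
    vad.accept_waveform(samples[i:i + window])
    while not vad.empty():
        seg = vad.front  # speech segment with .start (sample index) and .samples
        print(f"speech at {seg.start / sample_rate:.2f}s, {len(seg.samples) / sample_rate:.2f}s long")
        vad.pop()&lt;/code&gt;&lt;/pre&gt;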
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 id="mcetoc_1jhcht3fl7"&gt;&lt;strong&gt;Sherpa&lt;/strong&gt;&lt;strong&gt;(The Real-Time Speech Deployment Tool)&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;The deployment engine (CPU/GPU, Android, iOS, WebAssembly). It uses models trained in the k2-fsa ecosystem.&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;How they all fit together:&lt;/p&gt;
&lt;p&gt;1. k2-fsa is the tool you use to build a high-performance speech model.&lt;/p&gt;
&lt;p&gt;(Silero VAD model download: https://k2-fsa.github.io/sherpa/onnx/vad/silero-vad.html#download-models-files)&lt;/p&gt;
&lt;p&gt;2. Once you've trained that model using k2-fsa, you might upload it to Hugging Face or ModelScope so others can download it easily (see the download sketch below).&lt;/p&gt;
&lt;p&gt;3. Hugging Face hosts models from both ModelScope and k2-fsa/Sherpa, serving as a distribution point for them.&lt;/p&gt;
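&lt;p&gt;For example, fetching a file from the hub with the huggingface_hub package (a minimal sketch; the repo is the SenseVoiceSmall repository linked earlier, and the filename is an assumption about its contents):&lt;/p&gt;
&lt;pre class="language-python"&gt;&lt;code&gt;# Download one file from a Hugging Face repo into the local cache
# (repo_id is the SenseVoice repo linked above; filename is assumed).
from huggingface_hub import hf_hub_download

path = hf_hub_download(repo_id="FunAudioLLM/SenseVoiceSmall", filename="model.pt")
print(path)  # local cache path of the downloaded file&lt;/code&gt;&lt;/pre&gt;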
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;Layer Relationship&lt;/p&gt;
&lt;pre class="language-markup"&gt;&lt;code&gt;Model Hosting &amp;amp; Distribution
   ├── ModelScope
   └── Hugging Face

Inference / Runtime Framework
   └── k2-fsa / sherpa&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;</description>
  <author>blog.matterxiaomi.com</author>
  <guid isPermaLink="false">http://blog.matterxiaomi.com/blog/create-wyoming-server-home-assistant-part5/</guid>
  <pubDate>Wed, 11 Feb 2026 19:46:43 GMT</pubDate>
</item>
<item>
  <title>Create Wyoming server for Home Assistant Part 2 - STT - wyoming-funasr arm64</title>
  <link>http://blog.matterxiaomi.com/blog/create-wyoming-server-home-assistant-part2/</link>
  <description>&lt;p&gt;A Wyoming protocol server for the FunASR speech-to-text system: wyoming-funasr on arm64.&lt;/p&gt;
&lt;p&gt;FunASR: A Fundamental End-to-End Speech Recognition Toolkit.&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;To make an STT server work with Home Assistant, the industry standard is using the Wyoming Protocol.&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;div class="mce-toc"&gt;
&lt;h2&gt;Table of Contents&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#mcetoc_1jfpb2v493q"&gt;Step 1. Development Environment Setup&lt;/a&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#mcetoc_1ji8qhp3m2"&gt;Create Python virtual environment&amp;nbsp;&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="#mcetoc_1jfpb4g3942"&gt;Step 2. Install&amp;nbsp;&lt;/a&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#mcetoc_1jgnflo1m9"&gt;Install torch&amp;nbsp;&amp;nbsp;via PyPI&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#mcetoc_1jfpb2v493s"&gt;Install FunASR 1.3.0&amp;nbsp; via PyPI&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#mcetoc_1jfpb2v493u"&gt;Verify installation&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="#mcetoc_1jfpb2v493v"&gt;Step 3. Download and test a model (example: paraformer-zh)&lt;/a&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#mcetoc_1jhpjdu7o2"&gt;SenseVoice -&amp;nbsp;Speech Recognition (Non-streaming)&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="#mcetoc_1jfpqvmgh2"&gt;Step 4. FunASR + Wyoming STT full server&lt;/a&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#mcetoc_1jgnflo1ma"&gt;Install wyoming&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="#mcetoc_1jh1j919c1"&gt;Strategies to reduce latency&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;h2 id="mcetoc_1jfpb2v493q"&gt;Step 1. Development Environment Setup&lt;/h2&gt;
&lt;h3 id="mcetoc_1ji8qhp3m2"&gt;Create Python virtual environment&lt;/h3&gt;
&lt;p&gt;mkdir -p /funasr-wyoming&lt;/p&gt;
&lt;p&gt;cd /funasr-wyoming&lt;/p&gt;
&lt;p&gt;python3 -m venv venv&lt;/p&gt;
&lt;p&gt;source venv/bin/activate&lt;/p&gt;
&lt;p&gt;python --version&lt;/p&gt;
&lt;pre class="language-csharp"&gt;&lt;code&gt;Python 3.11.2
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;pip3 install wyoming==1.8.0&lt;/p&gt;
&lt;p&gt;pip3 install funasr==1.3.0&lt;/p&gt;
&lt;p&gt;pip3 install torch&lt;/p&gt;
&lt;p&gt;pip3 install torchaudio&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;pip3 show funasr&lt;/p&gt;
&lt;pre class="language-csharp"&gt;&lt;code&gt;(venv) root@raspberrypi:/funasr-wyoming# pip3 show funasr
Name: funasr
Version: 1.3.0
Summary: FunASR: A Fundamental End-to-End Speech Recognition Toolkit
Home-page: https://github.com/alibaba-damo-academy/FunASR.git
Author: Speech Lab of Alibaba Group
Author-email: funasr@list.alibaba-inc.com
License: The MIT License
Location: /funasr-wyoming/venv/lib/python3.11/site-packages
Requires: editdistance, hydra-core, jaconv, jamo, jieba, kaldiio, librosa, modelscope, oss2, pytorch_wpe, PyYAML, requests, scipy, sentencepiece, soundfile, tensorboardX, torch_complex, tqdm, umap_learn
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;Requirements&lt;/p&gt;
&lt;pre class="language-csharp"&gt;&lt;code&gt;python&amp;gt;=3.8
torch&amp;gt;=1.13
torchaudio&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 id="mcetoc_1jfpb4g3942"&gt;Step 2. Install&amp;nbsp;&lt;/h2&gt;
&lt;p&gt;(venv) root@raspberrypi:/funasr-wyoming# pip3 --version&lt;/p&gt;
&lt;p&gt;pip 23.0.1 from /funasr-wyoming/venv/lib/python3.11/site-packages/pip (python 3.11)&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 id="mcetoc_1jgnflo1m9"&gt;Install torch via PyPI&lt;/h3&gt;
&lt;p&gt;pip3 install torch==2.1.0 (CPU-only)&lt;/p&gt;
&lt;p&gt;output&lt;/p&gt;
&lt;pre class="language-csharp"&gt;&lt;code&gt;Installing collected packages: mpmath, sympy, networkx, MarkupSafe, fsspec, jinja2, torch
Successfully installed MarkupSafe-3.0.3 fsspec-2026.1.0 jinja2-3.1.6 mpmath-1.3.0 networkx-3.6.1 sympy-1.14.0 torch-2.1.0
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If ffmpeg is not installed, torchaudio is used to load audio.&lt;/p&gt;
&lt;p&gt;pip3 install torchaudio==2.1.0 (CPU-only)&lt;/p&gt;
&lt;p&gt;output&lt;/p&gt;
&lt;pre class="language-csharp"&gt;&lt;code&gt;Successfully installed torchaudio-2.1.0
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You will need the wyoming and funasr libraries.&lt;/p&gt;
&lt;h3 id="mcetoc_1jfpb2v493s"&gt;Install FunASR 1.3.0&amp;nbsp; via PyPI&lt;/h3&gt;
&lt;p&gt;pip3 install -U funasr==1.3.0&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;This will pull:&lt;/p&gt;
&lt;p&gt;Downloading https://www.piwheels.org/simple/threadpoolctl/threadpoolctl-3.6.0-py3-none-any.whl (18 kB)&lt;/p&gt;
&lt;p&gt;Installing collected packages: jieba, jamo, jaconv, crcmod, antlr4-python3-runtime, urllib3, typing_extensions, tqdm, threadpoolctl, six, sentencepiece, PyYAML, pycryptodome, pycparser, protobuf, platformdirs, packaging, numpy, msgpack, llvmlite, joblib, jmespath, idna, filelock, editdistance, decorator, charset_normalizer, certifi, audioread, torch_complex, tensorboardX, soxr, scipy, requests, pytorch_wpe, omegaconf, numba, lazy_loader, kaldiio, cffi, soundfile, scikit-learn, pooch, modelscope, hydra-core, cryptography, pynndescent, librosa, aliyun-python-sdk-core, umap_learn, aliyun-python-sdk-kms, oss2, funasr&lt;/p&gt;
&lt;p&gt;output&lt;/p&gt;
&lt;pre class="language-csharp"&gt;&lt;code&gt;Successfully installed PyYAML-6.0.3 aliyun-python-sdk-core-2.16.0 aliyun-python-sdk-kms-2.16.5 antlr4-python3-runtime-4.9.3 audioread-3.1.0 certifi-2026.1.4 cffi-2.0.0 charset_normalizer-3.4.4 crcmod-1.7 cryptography-46.0.3 decorator-5.2.1 editdistance-0.8.1 filelock-3.20.3 funasr-1.3.0 hydra-core-1.3.2 idna-3.11 jaconv-0.4.1 jamo-0.4.1 jieba-0.42.1 jmespath-0.10.0 joblib-1.5.3 kaldiio-2.18.1 lazy_loader-0.4 librosa-0.11.0 llvmlite-0.46.0 modelscope-1.34.0 msgpack-1.1.2 numba-0.63.1 numpy-2.3.5 omegaconf-2.3.0 oss2-2.19.1 packaging-26.0 platformdirs-4.5.1 pooch-1.8.2 protobuf-6.33.4 pycparser-3.0 pycryptodome-3.23.0 pynndescent-0.6.0 pytorch_wpe-0.0.1 requests-2.32.5 scikit-learn-1.8.0 scipy-1.17.0 sentencepiece-0.2.1 six-1.17.0 soundfile-0.13.1 soxr-1.0.0 tensorboardX-2.6.4 threadpoolctl-3.6.0 torch_complex-0.4.4 tqdm-4.67.1 typing_extensions-4.15.0 umap_learn-0.5.11 urllib3-2.6.3
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;Details: https://pypi.org/project/funasr&lt;/p&gt;
&lt;p&gt;sudo apt install ffmpeg&lt;/p&gt;
&lt;p&gt;output&lt;/p&gt;
&lt;p&gt;ffmpeg is already the newest version (8:5.1.8-0+deb12u1+rpt1).&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 id="mcetoc_1jfpb2v493u"&gt;Verify installation&lt;/h3&gt;
&lt;pre class="language-csharp"&gt;&lt;code&gt;python - &amp;lt;&amp;lt; 'EOF'
from funasr import AutoModel
print("FunASR imported OK")
EOF
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;output&lt;/p&gt;
&lt;pre class="language-csharp"&gt;&lt;code&gt;FunASR imported OK&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 id="mcetoc_1jfpb2v493v"&gt;Step 3. Download and test a model (example: paraformer-zh)&lt;/h2&gt;
&lt;p&gt;test.py&lt;/p&gt;
&lt;pre class="language-python"&gt;&lt;code&gt;from funasr import AutoModel

# Load Paraformer ASR together with VAD and punctuation models
model = AutoModel(
    model="paraformer-zh",
    model_revision="v2.0.4",
    vad_model="fsmn-vad",
    vad_model_revision="v2.0.4",
    punc_model="ct-punc",
    punc_model_revision="v2.0.4",
)

res = model.generate(input="test.wav")
print(res)

res = model.generate(input="https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/vad_example.wav")
print(res)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 id="mcetoc_1jhpjdu7o2"&gt;SenseVoice -&amp;nbsp;Speech Recognition (Non-streaming)&lt;/h3&gt;
&lt;pre class="language-python"&gt;&lt;code&gt;from funasr import AutoModel
from funasr.utils.postprocess_utils import rich_transcription_postprocess

model_dir = "iic/SenseVoiceSmall"

model = AutoModel(
    model=model_dir,
    vad_model="fsmn-vad",
    vad_kwargs={"max_single_segment_time": 30000},
    device="cuda:0",  # use "cpu" on a Raspberry Pi
)

# en
res = model.generate(
    input=f"{model.model_path}/example/en.mp3",
    cache={},
    language="auto",  # "zh", "en", "yue", "ja", "ko", "nospeech"
    use_itn=True,
    batch_size_s=60,
    merge_vad=True,  #
    merge_length_s=15,
)
text = rich_transcription_postprocess(res[0]["text"])
print(text)&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;see:https://github.com/modelscope/FunASR?tab=readme-ov-file#sensevoice&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;python3 test.py&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;Models are cached in:&lt;/p&gt;
&lt;p&gt;/root/.cache/modelscope/hub/models/iic/speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch&lt;/p&gt;
&lt;p&gt;output&lt;/p&gt;
&lt;pre class="language-csharp"&gt;&lt;code&gt;Downloading Model from https://www.modelscope.cn to directory: /root/.cache/modelscope/hub/models/iic/speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch


2026-01-25 08:58:28,492 - modelscope - INFO - Use user-specified model revision: v2.0.4
2026-01-25 08:58:28,595 - modelscope - INFO - Got 11 files, start to download ...
Downloading [fig/res.png]: 100%|███████████████████████████████████████████████████| 192k/192k [00:00&amp;lt;00:00, 386kB/s]
Downloading [am.mvn]: 100%|█████████████████████████████████████████████████████| 10.9k/10.9k [00:00&amp;lt;00:00, 21.7kB/s]
Downloading [example/hotword.txt]: 100%|███████████████████████████████████████████| 7.00/7.00 [00:00&amp;lt;00:00, 11.9B/s]
Downloading [config.yaml]: 100%|████████████████████████████████████████████████| 3.34k/3.34k [00:00&amp;lt;00:00, 5.66kB/s]
Downloading [configuration.json]: 100%|███████████████████████████████████████████████| 478/478 [00:00&amp;lt;00:00, 766B/s]
Downloading [README.md]: 100%|██████████████████████████████████████████████████| 11.3k/11.3k [00:00&amp;lt;00:00, 18.2kB/s]
Downloading [example/asr_example.wav]: 100%|███████████████████████████████████████| 141k/141k [00:00&amp;lt;00:00, 208kB/s]
Downloading [fig/seaco.png]: 100%|█████████████████████████████████████████████████| 167k/167k [00:00&amp;lt;00:00, 296kB/s]
Downloading [tokens.json]: 100%|█████████████████████████████████████████████████| 91.5k/91.5k [00:00&amp;lt;00:00, 165kB/s]
Downloading [seg_dict]: 100%|███████████████████████████████████████████████████| 7.90M/7.90M [00:03&amp;lt;00:00, 2.76MB/s]
Downloading [model.pt]: 100%|█████████████████████████████████████████████████████| 944M/944M [01:31&amp;lt;00:00, 10.8MB/s]
Processing 11 items: 100%|████████████████████████████████████████████████████████| 11.0/11.0 [01:31&amp;lt;00:00, 8.34s/it]
2026-01-25 09:00:00,347 - modelscope - INFO - Download model 'iic/speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch' successfully.
WARNING:root:trust_remote_code: False&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;A quick check that the model loads on CPU:&lt;/p&gt;
&lt;pre class="language-markup"&gt;&lt;code&gt;python3 -c "from funasr import AutoModel; AutoModel(model='paraformer-zh', device='cpu')"&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 id="mcetoc_1jfpqvmgh2"&gt;Step 4. FunASR + Wyoming STT full server&lt;/h2&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 id="mcetoc_1jgnflo1ma"&gt;Install wyoming&lt;/h3&gt;
&lt;p&gt;pip3 install wyoming==1.8.0&lt;/p&gt;
&lt;p&gt;output&lt;/p&gt;
&lt;pre class="language-csharp"&gt;&lt;code&gt;Looking in indexes: https://pypi.org/simple, https://www.piwheels.org/simple
Collecting wyoming==1.8.0
  Downloading https://www.piwheels.org/simple/wyoming/wyoming-1.8.0-py3-none-any.whl (39 kB)
Installing collected packages: wyoming
Successfully installed wyoming-1.8.0
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;A Wyoming server consists of an AsyncServer and an AsyncEventHandler. The handler processes events such as Describe, Transcribe, AudioStart, AudioChunk, and AudioStop.&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;Run it with python3 server.py. You will need the wyoming and funasr libraries.&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;Describe event&lt;/p&gt;
&lt;p&gt;Listen for a Describe event (to tell HA it's an STT service).&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;AudioStart event&lt;/p&gt;
&lt;p&gt;The HA client sends:&lt;/p&gt;
&lt;pre class="language-csharp"&gt;&lt;code&gt;{
  "type": "audio.start",
  "rate": 16000,
  "width": 2,
  "channels": 1
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In the handler, check for it with AudioStart.is_type(event.type), as in the round-trip sketch below.&lt;/p&gt;
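&lt;p&gt;A tiny round-trip of the payload above through the wyoming event types (assuming wyoming==1.8.0):&lt;/p&gt;
&lt;pre class="language-python"&gt;&lt;code&gt;# Round-trip the AudioStart payload above through the wyoming event types
from wyoming.audio import AudioStart

event = AudioStart(rate=16000, width=2, channels=1).event()
assert AudioStart.is_type(event.type)
start = AudioStart.from_event(event)
print(start.rate, start.width, start.channels)  # 16000 2 1&lt;/code&gt;&lt;/pre&gt;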
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;AudioChunk event&lt;/p&gt;
&lt;p&gt;The AudioChunk event is where you collect the raw PCM data: receive AudioChunk events and buffer them.&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;AudioStop event&lt;/p&gt;
&lt;p&gt;The AudioStop event is where you trigger inference: run the SenseVoice model once AudioStop is received.&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
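&lt;p&gt;Putting the pieces together, a minimal sketch of such a server (assuming the wyoming==1.8.0 API in the style of wyoming-faster-whisper; the FunasrEventHandler name and model settings are illustrative, not the exact server.py whose output is shown below):&lt;/p&gt;
&lt;pre class="language-python"&gt;&lt;code&gt;# Minimal Wyoming STT server sketch (wyoming==1.8.0 API assumed; names and
# model settings are illustrative, not the exact server.py shown below).
import asyncio
from functools import partial

import numpy as np
from funasr import AutoModel
from wyoming.asr import Transcribe, Transcript
from wyoming.audio import AudioChunk, AudioStart, AudioStop
from wyoming.info import AsrModel, AsrProgram, Attribution, Describe, Info
from wyoming.server import AsyncEventHandler, AsyncServer


class FunasrEventHandler(AsyncEventHandler):
    def __init__(self, wyoming_info, model, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.wyoming_info = wyoming_info
        self.model = model        # FunASR AutoModel, loaded once at startup
        self.audio = bytearray()  # buffer for 16 kHz / 16-bit / mono PCM

    async def handle_event(self, event):
        if Describe.is_type(event.type):
            # Tell Home Assistant this is an STT service
            await self.write_event(self.wyoming_info.event())
        elif Transcribe.is_type(event.type):
            pass  # optional language hint; shows as "Transcribe received"
        elif AudioStart.is_type(event.type):
            self.audio.clear()
        elif AudioChunk.is_type(event.type):
            self.audio.extend(AudioChunk.from_event(event).audio)
        elif AudioStop.is_type(event.type):
            # Run blocking inference off the event loop, then send the text
            loop = asyncio.get_running_loop()
            text = await loop.run_in_executor(None, self.transcribe, bytes(self.audio))
            await self.write_event(Transcript(text=text).event())
        return True  # keep the connection open

    def transcribe(self, pcm):
        # 16-bit PCM at 16 kHz assumed, as negotiated via AudioStart
        samples = np.frombuffer(pcm, dtype=np.int16).astype(np.float32) / 32768.0
        res = self.model.generate(input=samples, language="auto", use_itn=True)
        return res[0]["text"]


async def main():
    attribution = Attribution(name="FunASR", url="https://github.com/modelscope/FunASR")
    info = Info(asr=[AsrProgram(
        name="funasr", description="FunASR SenseVoice STT",
        attribution=attribution, installed=True, version="1.3.0",
        models=[AsrModel(name="SenseVoiceSmall", description="SenseVoice small model",
                         attribution=attribution, installed=True,
                         languages=["zh", "en"], version="1.0")],
    )])
    model = AutoModel(model="iic/SenseVoiceSmall", vad_model="fsmn-vad",
                      device="cpu", disable_update=True)
    server = AsyncServer.from_uri("tcp://0.0.0.0:10850")
    await server.run(partial(FunasrEventHandler, info, model))


if __name__ == "__main__":
    asyncio.run(main())&lt;/code&gt;&lt;/pre&gt;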
&lt;pre class="language-python"&gt;&lt;code&gt;funasr-wyoming# python3 server.py
funasr version: 1.3.0.
Check update of funasr, and it would cost few times. You may disable it by set `disable_update=True` in AutoModel
New version is available: 1.3.1.
Please use the command "pip install -U funasr" to upgrade.
2026-02-19 14:15:51.321 - INFO - root - download models from model hub: ms
Downloading Model from https://www.modelscope.cn to directory: /root/.cache/modelscope/hub/models/iic/SenseVoiceSmall
2026-02-19 14:15:52.591 - WARNING - root - trust_remote_code: False
2026-02-19 14:15:54.679 - INFO - root - Loading pretrained params from /root/.cache/modelscope/hub/models/iic/SenseVoiceSmall/model.pt
2026-02-19 14:15:54.688 - INFO - root - ckpt: /root/.cache/modelscope/hub/models/iic/SenseVoiceSmall/model.pt
2026-02-19 14:16:10.752 - INFO - root - scope_map: ['module.', 'None']
2026-02-19 14:16:10.871 - INFO - root - excludes: None
2026-02-19 14:16:11.383 - INFO - root - Loading ckpt: /root/.cache/modelscope/hub/models/iic/SenseVoiceSmall/model.pt, status: &amp;lt;All keys matched successfully&amp;gt;
2026-02-19 14:16:11.461 - INFO - root - Building VAD model.
2026-02-19 14:16:11.461 - INFO - root - download models from model hub: ms
Downloading Model from https://www.modelscope.cn to directory: /root/.cache/modelscope/hub/models/iic/speech_fsmn_vad_zh-cn-16k-common-pytorch
2026-02-19 14:16:13.308 - WARNING - root - trust_remote_code: False
2026-02-19 14:16:13.652 - INFO - root - Loading pretrained params from /root/.cache/modelscope/hub/models/iic/speech_fsmn_vad_zh-cn-16k-common-pytorch/model.pt
2026-02-19 14:16:13.653 - INFO - root - ckpt: /root/.cache/modelscope/hub/models/iic/speech_fsmn_vad_zh-cn-16k-common-pytorch/model.pt
2026-02-19 14:16:13.959 - INFO - root - scope_map: ['module.', 'None']
2026-02-19 14:16:13.959 - INFO - root - excludes: None
2026-02-19 14:16:13.962 - INFO - root - Loading ckpt: /root/.cache/modelscope/hub/models/iic/speech_fsmn_vad_zh-cn-16k-common-pytorch/model.pt, status: &amp;lt;All keys matched successfully&amp;gt;
2026-02-19 14:16:14.020 - INFO - wyoming-funasr-stt-server - Wyoming STT Server started on port 10850
2026-02-19 14:16:19.596 - INFO - wyoming-funasr-stt-server - Transcribe received
2026-02-19 14:16:19.597 - INFO - wyoming-funasr-stt-server - AudioStart received
2026-02-19 14:16:22.530 - INFO - wyoming-funasr-stt-server - AudioStop received. Processing...  95040 bytes of audio...
2026-02-19 14:16:22.557 - Audio length: 47520 samples, 2.97 s
2026-02-19 14:16:22.559 - INFO - wyoming-funasr-stt-server - Audio length: 47520 samples, 2.97 s
rtf_avg: 0.578: 100%|█████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:01&amp;lt;00:00,  1.71s/it]
rtf_avg: 1.793: 100%|█████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:01&amp;lt;00:00,  1.72s/it]
rtf_avg: 0.581, time_speech:  2.970, time_escape: 1.727: 100%|████████████████████████████████████████████| 1/1 [00:01&amp;lt;00:00,  1.73s/it]
2026-02-19 14:16:26.056 - Filtering out the emotion/event tags that SenseVoice may emit
2026-02-19 14:16:26.056 - INFO - wyoming-funasr-stt-server - Raw result: &amp;lt;|zh|&amp;gt;&amp;lt;|NEUTRAL|&amp;gt;&amp;lt;|Speech|&amp;gt;&amp;lt;|withitn|&amp;gt;换货。
2026-02-19 14:16:26.057 - INFO - wyoming-funasr-stt-server - Result: 换货。
2026-02-19 14:16:26.057 - Recognized text: 换货。
2026-02-19 14:16:35.711 - INFO - wyoming-funasr-stt-server - Transcribe received
2026-02-19 14:16:35.711 - INFO - wyoming-funasr-stt-server - AudioStart received
2026-02-19 14:16:38.141 - INFO - wyoming-funasr-stt-server - AudioStop received. Processing...  78400 bytes of audio...
2026-02-19 14:16:38.143 - Audio length: 39200 samples, 2.45 s
2026-02-19 14:16:38.143 - INFO - wyoming-funasr-stt-server - Audio length: 39200 samples, 2.45 s
rtf_avg: 0.028: 100%|█████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00&amp;lt;00:00, 14.81it/s]
rtf_avg: 1.693: 100%|█████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:01&amp;lt;00:00,  1.53s/it]
rtf_avg: 0.624, time_speech:  2.450, time_escape: 1.530: 100%|████████████████████████████████████████████| 1/1 [00:01&amp;lt;00:00,  1.53s/it]
2026-02-19 14:16:39.742 - Filtering out the emotion/event tags that SenseVoice may emit
2026-02-19 14:16:39.742 - INFO - wyoming-funasr-stt-server - Raw result: &amp;lt;|zh|&amp;gt;&amp;lt;|NEUTRAL|&amp;gt;&amp;lt;|Speech|&amp;gt;&amp;lt;|withitn|&amp;gt;关火。
2026-02-19 14:16:39.742 - INFO - wyoming-funasr-stt-server - Result: 关火。
2026-02-19 14:16:39.742 - Recognized text: 关火。
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;pip3 list&lt;/p&gt;
&lt;pre class="language-csharp"&gt;&lt;code&gt;Package                Version
---------------------- --------
aliyun-python-sdk-core 2.16.0
aliyun-python-sdk-kms  2.16.5
antlr4-python3-runtime 4.9.3
audioread              3.1.0
certifi                2026.1.4
cffi                   2.0.0
charset-normalizer     3.4.4
crcmod                 1.7
cryptography           46.0.3
decorator              5.2.1
editdistance           0.8.1
filelock               3.20.3
fsspec                 2026.1.0
funasr                 1.3.0
hydra-core             1.3.2
idna                   3.11
ifaddr                 0.2.0
jaconv                 0.4.1
jamo                   0.4.1
jieba                  0.42.1
Jinja2                 3.1.6
jmespath               0.10.0
joblib                 1.5.3
kaldiio                2.18.1
lazy_loader            0.4
librosa                0.11.0
llvmlite               0.46.0
MarkupSafe             3.0.3
modelscope             1.34.0
mpmath                 1.3.0
msgpack                1.1.2
networkx               3.6.1
numba                  0.63.1
numpy                  1.26.4
omegaconf              2.3.0
oss2                   2.19.1
packaging              26.0
pip                    23.0.1
platformdirs           4.5.1
pooch                  1.8.2
protobuf               6.33.4
pycparser              3.0
pycryptodome           3.23.0
pynndescent            0.6.0
pytorch-wpe            0.0.1
PyYAML                 6.0.3
requests               2.32.5
scikit-learn           1.8.0
scipy                  1.17.0
sentencepiece          0.2.1
setuptools             66.1.1
six                    1.17.0
soundfile              0.13.1
soxr                   1.0.0
sympy                  1.14.0
tensorboardX           2.6.4
threadpoolctl          3.6.0
torch                  2.1.0
torch_complex          0.4.4
torchaudio             2.1.0
tqdm                   4.67.1
typing_extensions      4.15.0
umap-learn             0.5.11
urllib3                2.6.3
wyoming                1.8.0
zeroconf               0.148.0
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 id="mcetoc_1jh1j919c1"&gt;Strategies to reduce latency&lt;/h2&gt;
&lt;p&gt;1. Use smaller models&lt;/p&gt;
&lt;p&gt;FunASR has paraformer-zh-small or paraformer-zh-medium.&lt;/p&gt;
&lt;p&gt;2. VAD pre-filtering&lt;/p&gt;
&lt;p&gt;Skip silence chunks and feed only speech to the recognizer; a streaming VAD sketch follows.&lt;/p&gt;
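&lt;p&gt;A minimal streaming VAD sketch, adapted from the FunASR README's fsmn-vad example (vad_example.wav is a placeholder input file):&lt;/p&gt;
&lt;pre class="language-python"&gt;&lt;code&gt;# Streaming VAD pre-filter with FunASR's fsmn-vad (adapted from the FunASR
# README); only chunks flagged as speech need to reach the recognizer.
import soundfile
from funasr import AutoModel

chunk_size = 200  # ms per chunk
model = AutoModel(model="fsmn-vad", disable_update=True)

speech, sample_rate = soundfile.read("vad_example.wav")  # 16 kHz mono
chunk_stride = int(chunk_size * sample_rate / 1000)

cache = {}
total_chunk_num = int(len(speech) / chunk_stride) + 1
for i in range(total_chunk_num):
    speech_chunk = speech[i * chunk_stride:(i + 1) * chunk_stride]
    is_final = i == total_chunk_num - 1
    res = model.generate(input=speech_chunk, cache=cache,
                         is_final=is_final, chunk_size=chunk_size)
    if len(res[0]["value"]):
        # [beg_ms, end_ms] pairs; -1 marks a segment boundary still open
        print(res[0]["value"])&lt;/code&gt;&lt;/pre&gt;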
&lt;p&gt;&amp;nbsp;&lt;/p&gt;</description>
  <author>blog.matterxiaomi.com</author>
  <guid isPermaLink="false">http://blog.matterxiaomi.com/blog/create-wyoming-server-home-assistant-part2/</guid>
  <pubDate>Thu, 05 Feb 2026 17:55:26 GMT</pubDate>
</item>
<item>
  <title>Create Wyoming server for Home Assistant Part 3 - STT - wyoming-funasr arm64 sherpa-onnx</title>
  <link>http://blog.matterxiaomi.com/blog/create-wyoming-server-home-assistant-part3/</link>
  <description>&lt;p&gt;STT - wyoming-funasr on arm64 with ONNX.&lt;/p&gt;
&lt;p&gt;A Wyoming ASR server for Home Assistant: a step-by-step guide for developers.&lt;/p&gt;
&lt;p&gt;Building FunASR with sherpa-onnx on an ARM64 (aarch64) system.&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;To make an STT server work with Home Assistant, the industry standard is the Wyoming Protocol.&lt;/p&gt;
&lt;p&gt;sherpa-onnx: the "engine" that runs the STT. It manages the model files and utilizes the RPi 5&amp;rsquo;s CPU.&lt;/p&gt;
&lt;p&gt;Local control: the Wyoming Protocol integration bridges the STT server into Home Assistant's "Assist" pipeline.&lt;/p&gt;
&lt;div class="mce-toc"&gt;
&lt;h2&gt;Table of Contents&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#mcetoc_1jgp9detg3"&gt;Quick start&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#mcetoc_1jgp96qm81"&gt;Create Python virtual environment&amp;nbsp;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#mcetoc_1jgj084196"&gt;Install sherpa-onnx&lt;/a&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#mcetoc_1jgj084197"&gt;Install&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="#mcetoc_1jgj0hqlhe"&gt;Download Model&lt;/a&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#mcetoc_1jglk9fvq1"&gt;download pre-trained&amp;nbsp; models(SenseVoice)&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="#mcetoc_1jgjc7k9k1"&gt;server.py&lt;/a&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#mcetoc_1jgjc8q7q3"&gt;step 1. install&amp;nbsp;numpy&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 id="mcetoc_1jgp9detg3"&gt;Quick start&lt;/h2&gt;
&lt;p&gt;# Install sherpa-onnx&lt;/p&gt;
&lt;p&gt;pip3 install sherpa-onnx sherpa-onnx-bin&lt;/p&gt;
&lt;p&gt;pip3 install wyoming==1.8.0&lt;/p&gt;
&lt;p&gt;pip3 install numpy&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 id="mcetoc_1jgp96qm81"&gt;Create Python virtual environment&lt;/h2&gt;
&lt;p&gt;mkdir -p /funasr-wyoming-sherpa-onnx&lt;/p&gt;
&lt;p&gt;cd /funasr-wyoming-sherpa-onnx&lt;/p&gt;
&lt;p&gt;python3 -m venv venv&lt;/p&gt;
&lt;p&gt;source venv/bin/activate&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 id="mcetoc_1jgj084196"&gt;Install sherpa-onnx&lt;/h2&gt;
&lt;h3 id="mcetoc_1jgj084197"&gt;Install&lt;/h3&gt;
&lt;pre class="language-python"&gt;&lt;code&gt;pip3 install sherpa-onnx sherpa-onnx-bin
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;pre class="language-csharp"&gt;&lt;code&gt;Installing collected packages: sherpa-onnx-core, sherpa-onnx-bin, sherpa-onnx
Successfully installed sherpa-onnx-1.12.23 sherpa-onnx-bin-1.12.23 sherpa-onnx-core-1.12.23&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;https://k2-fsa.github.io/sherpa/onnx/python/install.html#method-1-from-pre-compiled-wheels-cpu-only&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;pre class="language-markup"&gt;&lt;code&gt;# pip3 show sherpa-onnx&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;output&lt;/p&gt;
&lt;pre class="language-markup"&gt;&lt;code&gt;Name: sherpa_onnx
Version: 1.12.23
Summary: 
Home-page: https://github.com/k2-fsa/sherpa-onnx
Author: The sherpa-onnx development team
Author-email: dpovey@gmail.com
License: Apache licensed, as found in the LICENSE file
Location: /funasr-wyoming-sherpa-onnx/venv/lib/python3.11/site-packages&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 id="mcetoc_1jgj0hqlhe"&gt;Download Model&lt;/h2&gt;
&lt;p&gt;This section describes how to download pre-trained SenseVoice models.&lt;/p&gt;
&lt;h3 id="mcetoc_1jglk9fvq1"&gt;download pre-trained&amp;nbsp; models(SenseVoice)&lt;/h3&gt;
&lt;pre class="language-csharp"&gt;&lt;code&gt;cd /funasr-wyoming-sherpa-onnx

wget https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-sense-voice-zh-en-ja-ko-yue-int8-2025-09-09.tar.bz2

tar xvf sherpa-onnx-sense-voice-zh-en-ja-ko-yue-int8-2025-09-09.tar.bz2

rm sherpa-onnx-sense-voice-zh-en-ja-ko-yue-int8-2025-09-09.tar.bz2&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;or&lt;/p&gt;
&lt;pre class="language-python"&gt;&lt;code&gt;# Install ModelScope
pip install modelscope

# Download the model via the ModelScope SDK
from modelscope import snapshot_download
model_dir = snapshot_download('xiaowangge/sherpa-onnx-sense-voice-small')&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;output&lt;/p&gt;
&lt;pre class="language-csharp"&gt;&lt;code&gt;root@raspberrypi:/funasr-wyoming-sherpa-onnx# tree -L 2
.
├── sherpa-onnx-sense-voice-zh-en-ja-ko-yue-int8-2025-09-09
│   ├── model.int8.onnx
│   ├── README.md
│   ├── test_wavs
│   └── tokens.txt
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 id="mcetoc_1jgjc7k9k1"&gt;server.py&lt;/h2&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 id="mcetoc_1jgjc8q7q3"&gt;Step 1. Install numpy&lt;/h3&gt;
&lt;pre class="language-python"&gt;&lt;code&gt;pip3 install numpy
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;output&lt;/p&gt;
&lt;pre class="language-csharp"&gt;&lt;code&gt;Installing collected packages: numpy
Successfully installed numpy-2.4.2
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;pre class="language-python"&gt;&lt;code&gt;pip3 install wyoming==1.8.0&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;output&lt;/p&gt;
&lt;pre class="language-csharp"&gt;&lt;code&gt;  Downloading https://www.piwheels.org/simple/wyoming/wyoming-1.8.0-py3-none-any.whl (39 kB)
Installing collected packages: wyoming
Successfully installed wyoming-1.8.0
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;cd /funasr-wyoming-sherpa-onnx&lt;/p&gt;
&lt;p&gt;source venv/bin/activate&lt;/p&gt;
&lt;p&gt;python3 server.py&lt;/p&gt;
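&lt;p&gt;The core of server.py is the sherpa-onnx recognizer. A minimal decoding sketch (assuming the OfflineRecognizer.from_sense_voice API from the sherpa-onnx Python examples, with the model directory extracted above):&lt;/p&gt;
&lt;pre class="language-python"&gt;&lt;code&gt;# Minimal sherpa-onnx SenseVoice decoding sketch (API assumed from the
# sherpa-onnx Python examples; paths match the model extracted above).
import sherpa_onnx
import soundfile as sf

model_dir = "./sherpa-onnx-sense-voice-zh-en-ja-ko-yue-int8-2025-09-09"
recognizer = sherpa_onnx.OfflineRecognizer.from_sense_voice(
    model=f"{model_dir}/model.int8.onnx",
    tokens=f"{model_dir}/tokens.txt",
    num_threads=2,  # RPi 5 has 4 cores; leave headroom for the server
    use_itn=True,   # inverse text normalization (numbers, punctuation)
)

# Decode one of the bundled test files (test_wavs contents assumed)
samples, sample_rate = sf.read(f"{model_dir}/test_wavs/zh.wav", dtype="float32")
stream = recognizer.create_stream()
stream.accept_waveform(sample_rate, samples)
recognizer.decode_stream(stream)
print(stream.result.text)&lt;/code&gt;&lt;/pre&gt;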
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;Useful links&lt;/p&gt;
&lt;p&gt;How to download pre-trained SenseVoice models:&lt;/p&gt;
&lt;p&gt;https://k2-fsa.github.io/sherpa/onnx/sense-voice/index.html&lt;/p&gt;
&lt;p&gt;https://k2-fsa.github.io/sherpa/onnx/sense-voice/pretrained.html#sherpa-onnx-sense-voice-zh-en-ja-ko-yue-2024-07-17&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;https://k2-fsa.github.io/sherpa/onnx/sense-voice/pretrained.html#sherpa-onnx-sense-voice-zh-en-ja-ko-yue-int8-2025-09-09-chinese-english-japanese-korean-cantonese&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;</description>
  <author>blog.matterxiaomi.com</author>
  <guid isPermaLink="false">http://blog.matterxiaomi.com/blog/create-wyoming-server-home-assistant-part3/</guid>
  <pubDate>Wed, 04 Feb 2026 00:53:12 GMT</pubDate>
</item>
<item>
  <title>How to repeat an action until another (specific) device trigger is received in Home Assistant</title>
  <link>http://blog.matterxiaomi.com/blog/home-assistant-automation-part1/</link>
  <description>&lt;p&gt;How to repeat an action until another (specific) device trigger is received in Home Assistant.&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;To repeat an action in Home Assistant until a specific device trigger is received, use a repeat-until loop combined with a wait_for_trigger action.&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;until&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;img src="/Posts/files/repeat-until-trigger-ed-_639055750356970042.jpg" alt="repeat-until-trigger-ed-.jpg" width="1020" height="805" /&gt;&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;img src="/Posts/files/repeat-until-trigger-id-2_639055750357620317.jpg" alt="repeat-until-trigger-id-2.jpg" width="1569" height="899" /&gt;&lt;/p&gt;
&lt;pre class="language-python"&gt;&lt;code&gt;alias: New automation find my mobile phone - 找手机
description: &amp;gt;-
  - http://localhost:4999/boards/topic/16760/notify - ## 
  http://localhost:4999/boards/topic/16760 - ##
  http://192.168.2.125:8123/config/automation/edit/1747045045725
triggers:
  - trigger: conversation
    command: "[找手机|手机|手机在哪儿|我手机在哪儿]"
    id: id_find_my_phone
  - trigger: state
    entity_id:
      - binary_sensor.sm_g9910_private_interactive
    from:
      - "off"
    to:
      - "on"
    id: id_phone_interactive_turn_on
    enabled: true
conditions: []
actions:


  - repeat:
      until:
        - condition: trigger
          id:
            - id_phone_interactive_turn_on
      sequence:
        - action: script.turn_on
          metadata: {}
          data: {}
          target:
            entity_id: script.find_my_phone
        - delay:
            hours: 0
            minutes: 0
            seconds: 5
            milliseconds: 0
    enabled: false
 

mode: single
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;Way 2: run the repeat loop and a wait_for_trigger in parallel.&lt;/p&gt;
&lt;pre class="language-csharp"&gt;&lt;code&gt;sequence:
  - variables:
      stopVar: false
  - parallel:
      - repeat:
          until:
            - condition: template
              value_template: "{{ stopVar == true }}"
          sequence:
            - delay:
                hours: 0
                minutes: 0
                seconds: 1
                milliseconds: 0
            - action: input_boolean.toggle
              metadata: {}
              data: {}
              target:
                entity_id: input_boolean.quick_toggle
      - sequence:
          - wait_for_trigger:
              - trigger: state
                entity_id:
                  - button.push
          - variables:
              stopVar: true
alias: Parallel Test
description: ""&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;</description>
  <author>blog.matterxiaomi.com</author>
  <category>automation</category>
  <guid isPermaLink="false">http://blog.matterxiaomi.com/blog/home-assistant-automation-part1/</guid>
  <pubDate>Sat, 31 Jan 2026 13:53:45 GMT</pubDate>
</item>
<item>
  <title>How to Install Nextcloud on Raspberry Pi (Without Domain)</title>
  <link>http://blog.matterxiaomi.com/blog/raspberry-pi-part8/</link>
  <description>&lt;p&gt;How to install Nextcloud on a Raspberry Pi.&lt;/p&gt;
&lt;p&gt;A normal Docker setup works without a domain; all processing stays local.&lt;/p&gt;
&lt;div class="mce-toc"&gt;
&lt;h2&gt;Table of Contents&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#mcetoc_1jck97s4va"&gt;Quick Start&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#mcetoc_1jceujd8d5"&gt;Part 1. Installing Nextcloud on a Raspberry Pi with Docker&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#mcetoc_1jceujd8d6"&gt;Part 2. Use a Windows folder share as external storage in Nextcloud&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#mcetoc_1jckbh49c1"&gt;Backups&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;h2 id="mcetoc_1jck97s4va"&gt;Quick Start&lt;/h2&gt;
&lt;p&gt;Raspberry Pi OS Lite (Debian 12 bookworm) setup&lt;/p&gt;
&lt;p&gt;Install type: Docker Nextcloud&lt;/p&gt;
&lt;p&gt;Nextcloud + Docker + RPi + Windows SMB, SQLite.&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;Access Nextcloud via&amp;nbsp;http://{raspberry-pi-ip}:8120&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;I installed the Nextcloud Docker image/container on my Raspberry Pi. Everything works fine.&lt;/p&gt;
&lt;h2 id="mcetoc_1jceujd8d5"&gt;Part 1. Installing Nextcloud on a Raspberry Pi with Docker&lt;/h2&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;Step 1. Run Nextcloud (SQLite, no DB container, works via IP)&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;Note&lt;/p&gt;
&lt;p&gt;Uses SQLite by default (no MariaDB / PostgreSQL).&lt;/p&gt;
&lt;p&gt;Works via IP (no domain / HTTPS required).&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;Viewing the Nextcloud configuration (config.php)&lt;/p&gt;
&lt;pre class="language-csharp"&gt;&lt;code&gt;# Inside container
docker exec -it nextcloud bash

# Check config:
grep dbtype config/config.php
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;output&lt;/p&gt;
&lt;pre class="language-markup"&gt;&lt;code&gt;'dbtype' =&amp;gt; 'sqlite3',&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;Step 2. First-Time Setup (Web UI)&lt;/p&gt;
&lt;p&gt;http://&amp;lt;RPI-IP&amp;gt;:8280&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;img src="/Posts/files/Docker-Nextcloud-2-1_639013271158105350.png" alt="Docker-Nextcloud-2-1.png" width="353" height="740" /&gt;&lt;/p&gt;
&lt;p&gt;Note&lt;/p&gt;
&lt;p&gt;Database:SQLite is selected&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 id="mcetoc_1jceujd8d6"&gt;Part 2. Use a Windows folder share as external storage in Nextcloud&lt;/h2&gt;
&lt;p&gt;Mount a shared folder from Windows 10 into a folder in your Nextcloud drive and use it as data storage.&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;Docker Nextcloud connects via SMB to Windows 10.&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;Create a shared folder on Windows 10.&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;Auto-mount at boot on the RPi 5.&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;Mount the Windows shared folder into Nextcloud.&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 id="mcetoc_1jckbh49c1"&gt;Backups&lt;/h2&gt;
&lt;p&gt;Files are located in the Nextcloud folder, in the data subfolder.&lt;/p&gt;
&lt;p&gt;For example, the admin user's files are probably in /var/www/html/nextcloud/data/admin/files/.&lt;/p&gt;
&lt;p&gt;Make a backup of /var/www/html/nextcloud/data; a small tarfile sketch follows.&lt;/p&gt;
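&lt;p&gt;A minimal backup sketch using Python's tarfile (the source path follows the example above; the destination directory is illustrative):&lt;/p&gt;
&lt;pre class="language-python"&gt;&lt;code&gt;# Create a dated tarball of the Nextcloud data directory (source path as in
# the example above; the destination directory is an illustrative choice).
import os
import tarfile
from datetime import date

src = "/var/www/html/nextcloud/data"
dst_dir = "/datadocker/backups"  # adjust to your backup location
os.makedirs(dst_dir, exist_ok=True)

dst = f"{dst_dir}/nextcloud-data-{date.today()}.tar.gz"
with tarfile.open(dst, "w:gz") as tar:
    tar.add(src, arcname="data")
print("wrote", dst)&lt;/code&gt;&lt;/pre&gt;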
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;Useful links&lt;/p&gt;
&lt;p&gt;https://github.com/nextcloud/docker&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;http://localhost:4999/boards/topic/48316/nextcloud/page/3#82156&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;</description>
  <author>blog.matterxiaomi.com</author>
  <category>raspberry pi</category>
  <category>docker</category>
  <guid isPermaLink="false">http://blog.matterxiaomi.com/blog/raspberry-pi-part8/</guid>
  <pubDate>Sun, 14 Dec 2025 16:38:35 GMT</pubDate>
</item>
<item>
  <title>Global Variables in HomeAssistant part8 - Creating devices via MQTT templates</title>
  <link>http://blog.matterxiaomi.com/blog/global-variables-homeassistant-part8/</link>
  <description>&lt;p&gt;Defining MQTT-based sensors and grouping a bunch of sensors as a device.&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;pre class="language-csharp"&gt;&lt;code&gt;mqtt:
  binary_sensor:
  - state_topic: rtl_433/Acurite-986/5197/battery_ok
    json_attributes_topic: rtl_433/Acurite-986/5197
    device_class: battery
    name: "Chest freezer temperature battery state"
    unique_id: chest_freezer_temperature_battery_state
    payload_on: '0'
    payload_off: '1'
    device:
      identifiers: 986-5197
      manufacturer: AcuRite
      name: Chest freezer temperature
      model: 986
      suggested_area: Green bedroom
  sensor:
  - state_topic: rtl_433/Acurite-986/5197/temperature_C
    json_attributes_topic: rtl_433/Acurite-986/5197
    device_class: temperature
    name: "Chest freezer temperature"
    unique_id: chest_freezer_temperature
    unit_of_measurement: "&amp;deg;C"
    value_template: '{{ value | round(0) }}'
    state_class: "measurement"
    device:
      identifiers: 986-5197
      manufacturer: AcuRite
      name: Chest freezer temperature
      model: 986
      suggested_area: Green bedroom&lt;/code&gt;&lt;/pre&gt;
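&lt;p&gt;To verify the definitions without the physical sensor, you can publish test payloads to the same topics. A minimal sketch (assumes paho-mqtt 2.x and a broker on localhost; adjust the host to your setup):&lt;/p&gt;
&lt;pre class="language-python"&gt;&lt;code&gt;# Publish test payloads to the rtl_433 topics defined above
# (assumes paho-mqtt 2.x and an MQTT broker on localhost).
import paho.mqtt.client as mqtt

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.connect("localhost", 1883)
client.loop_start()

# Temperature for the sensor, battery flag for the binary_sensor ("1" = OK)
client.publish("rtl_433/Acurite-986/5197/temperature_C", "-18.3", retain=True).wait_for_publish()
client.publish("rtl_433/Acurite-986/5197/battery_ok", "1", retain=True).wait_for_publish()

client.loop_stop()
client.disconnect()&lt;/code&gt;&lt;/pre&gt;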
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;Once defined in YAML, reload the configuration and you'll notice your entities are now grouped as a device.&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;</description>
  <author>blog.matterxiaomi.com</author>
  <category>global variables</category>
  <guid isPermaLink="false">http://blog.matterxiaomi.com/blog/global-variables-homeassistant-part8/</guid>
  <pubDate>Thu, 04 Dec 2025 21:44:17 GMT</pubDate>
</item>
<item>
  <title>Set attributes only</title>
  <link>http://blog.matterxiaomi.com/blog/global-variables-homeassistant-part7/</link>
  <description>&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;How to change an attribute of an entity within an automation.&lt;/p&gt;
&lt;p&gt;A python_script can be used to set the state and attributes of any entity programmatically.&lt;/p&gt;
&lt;p&gt;You can also add or modify attributes associated with the entity in the "Attributes" field.&lt;/p&gt;
&lt;div class="mce-toc"&gt;
&lt;h2&gt;Table of Contents&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#mcetoc_1jau0k6tp1"&gt;Enable python script integration in configuration.yaml&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;h2 id="mcetoc_1jau0k6tp1"&gt;Enable python script integration in configuration.yaml&lt;/h2&gt;
&lt;p&gt;Add the component to your configuration.yaml file.&lt;/p&gt;
&lt;pre class="language-csharp"&gt;&lt;code&gt;python_script:&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;Python script&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;pre class="language-python"&gt;&lt;code&gt;# /config/python_scripts/update_entity_attributes.py
# Author: matterxiaomi
# Blog: blog.matterxiaomi.com

entity_id = data.get('entity_id')
attributes_to_update = data.get('attributes')

if entity_id is None:
    logger.error("entity_id is required for updating attributes.")
elif attributes_to_update is None:
    logger.warning("No attributes provided to update for entity: %s", entity_id)
else:
    # Get current state and attributes
    current_state_object = hass.states.get(entity_id)

    if current_state_object:
        current_state = current_state_object.state
        current_attributes = dict(current_state_object.attributes) # Create a mutable copy

        # Update only the specified attributes
        current_attributes.update(attributes_to_update)

        # Set the state back with updated attributes, preserving the original state
        hass.states.set(entity_id, current_state, current_attributes)
        logger.info("Attributes updated for entity %s", entity_id)
    else:
        logger.warning("Entity %s not found. Cannot update attributes.", entity_id)&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Explanation:&lt;/p&gt;
&lt;p&gt;1. Set attributes only.&lt;/p&gt;
&lt;p&gt;2. The existing state is preserved unless explicitly provided in the state parameter.&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;Test it from Developer Tools:&lt;/p&gt;
&lt;pre class="language-csharp"&gt;&lt;code&gt;action: python_script.update_entity_attributes
data: 
  entity_id: sensor.my_map
  attributes:
    friendly_name: new_value
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;Use this script in an automation or another script:&lt;/p&gt;
&lt;pre class="language-python"&gt;&lt;code&gt;- service: python_script.update_entity_attributes
  data:
    entity_id: sensor.my_entity
    attributes:
      new_attribute_1: value_1
      another_attribute: new_value&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Explanation:&lt;/p&gt;
&lt;p&gt;To update only attributes of an entity in Home Assistant using a Python script, you can leverage the hass.states.set() function, passing in the entity_id and the new_attributes you want to set. The existing state will be preserved unless explicitly provided in the state parameter.&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;Alternatively, the homeassistant.update_entity service asks the integration to refresh an entity:&lt;/p&gt;
&lt;pre class="language-csharp"&gt;&lt;code&gt;script:
  update_travel_time:
    alias: "Manually update travel time sensor"
    sequence:
      - service: homeassistant.update_entity
        target:
          entity_id: sensor.google_travel_time_home_work # Replace with your sensor's entity ID&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;pre class="language-csharp"&gt;&lt;code&gt;mkdir config/python_scripts
cat &amp;lt;&amp;lt;'EOF' &amp;gt; config/python_scripts/set_state.py
entity = data['entity_id']
hass.states.set(entity, data['state'])
EOF&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;Set State via Button&lt;/p&gt;
&lt;pre class="language-csharp"&gt;&lt;code&gt;square: true
type: grid
cards:
  - show_name: true
    show_icon: true
    name: sleep
    type: button
    tap_action:
      action: call-service
      service: python_script.set_state
      data:
        entity_id: sensor.baby
        state: sleep
      target: {}
    icon: mdi:baby-face-outline&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;Useful links&lt;/p&gt;
&lt;p&gt;Python script for Home Assistant to handle state and attributes of existing sensors and entities.&lt;/p&gt;
&lt;p&gt;https://github.com/pmazz/ps_hassio_entities&lt;/p&gt;</description>
  <author>blog.matterxiaomi.com</author>
  <category>global variables</category>
  <guid isPermaLink="false">http://blog.matterxiaomi.com/blog/global-variables-homeassistant-part7/</guid>
  <pubDate>Sun, 23 Nov 2025 15:37:31 GMT</pubDate>
</item>
<item>
  <title>Ha Network Storage</title>
  <link>http://blog.matterxiaomi.com/blog/webdav/</link>
  <description>&lt;p&gt;The External Storage Support application enables you to mount external storage services and devices as secondary HA storage devices.&amp;nbsp;&lt;/p&gt;
&lt;div class="mce-toc"&gt;
&lt;h2&gt;Table of Contents&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#mcetoc_1jcskk1c72"&gt;Client&lt;/a&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#mcetoc_1jcskkm375"&gt;rpi5 -&amp;gt; smb folder share on windows 10&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;p&gt;Storage server and protocol&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;Storage client and protocol&lt;/p&gt;
&lt;pre class="language-csharp"&gt;&lt;code&gt;# protocol:
# - SMB: smb://192.168.1.100/PhoneShare
# - FTP: ftp://192.168.1.100:21
# - WebDAV: http://192.168.1.100:8080/
# - SFTP: sftp://192.168.1.100:22
# - NFS: mount -t nfs 192.168.1.10:/home/share /mnt&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;table style="border-collapse: collapse; width: 211.133%; height: 378px;" border="1"&gt;
&lt;tbody&gt;
&lt;tr style="height: 18px;"&gt;
&lt;td style="height: 18px; width: 50%;" colspan="4"&gt;&amp;nbsp;&lt;/td&gt;
&lt;td style="height: 18px; width: 44.2685%;" colspan="4"&gt;ha on rpi5&lt;/td&gt;
&lt;td style="width: 51.037%; height: 18px;"&gt;Scene&lt;/td&gt;
&lt;/tr&gt;
&lt;tr style="height: 18px;"&gt;
&lt;td style="width: 50%; height: 18px;" colspan="4"&gt;Server&lt;/td&gt;
&lt;td style="height: 18px; width: 44.2685%;" colspan="4"&gt;Client&lt;/td&gt;
&lt;td style="width: 51.037%; height: 18px;"&gt;&amp;nbsp;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr style="height: 18px;"&gt;
&lt;td style="width: 12.5%; height: 18px;"&gt;OS Type&lt;/td&gt;
&lt;td style="width: 12.5%; height: 18px;"&gt;Inside OS&lt;/td&gt;
&lt;td style="width: 12.5%; height: 18px;"&gt;sharing protocol&lt;/td&gt;
&lt;td style="width: 12.5%; height: 18px;"&gt;&amp;nbsp;&lt;/td&gt;
&lt;td style="width: 12.5%; height: 18px;"&gt;Inside OS&lt;/td&gt;
&lt;td style="height: 18px; width: 31.7685%;" colspan="3"&gt;Inside HA&lt;/td&gt;
&lt;td style="width: 51.037%; height: 18px;"&gt;&amp;nbsp;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr style="height: 18px;"&gt;
&lt;td style="width: 12.5%; height: 18px;"&gt;local storage server&lt;/td&gt;
&lt;td style="width: 12.5%; height: 18px;"&gt;&amp;nbsp;&lt;/td&gt;
&lt;td style="width: 12.5%; height: 18px;"&gt;&amp;nbsp;&lt;/td&gt;
&lt;td style="width: 12.5%; height: 18px;"&gt;&amp;nbsp;&lt;/td&gt;
&lt;td style="width: 12.5%; height: 18px;"&gt;&amp;nbsp;&lt;/td&gt;
&lt;td style="width: 12.5%; height: 18px;"&gt;&amp;nbsp;&lt;/td&gt;
&lt;td style="width: 12.5%; height: 18px;"&gt;&amp;nbsp;&lt;/td&gt;
&lt;td style="width: 6.7685%; height: 18px;"&gt;&amp;nbsp;&lt;/td&gt;
&lt;td style="width: 51.037%; height: 18px;"&gt;&amp;nbsp;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr style="height: 54px;"&gt;
&lt;td style="width: 12.5%; height: 54px;"&gt;windows 10&lt;/td&gt;
&lt;td style="width: 12.5%; height: 54px;"&gt;Install IIS WebDAV Server&lt;/td&gt;
&lt;td style="width: 12.5%; height: 54px;"&gt;Web server&lt;/td&gt;
&lt;td style="width: 12.5%; height: 54px;"&gt;webdav Storage&lt;/td&gt;
&lt;td style="width: 12.5%; height: 54px;"&gt;&amp;nbsp;&lt;/td&gt;
&lt;td style="width: 12.5%; height: 54px;"&gt;&lt;a href="https://www.home-assistant.io/integrations/webdav"&gt;webdav integration&lt;/a&gt;&lt;/td&gt;
&lt;td style="width: 12.5%; height: 54px;"&gt;&amp;nbsp;&lt;/td&gt;
&lt;td style="width: 6.7685%; height: 54px;"&gt;&amp;nbsp;&lt;/td&gt;
&lt;td style="width: 51.037%; height: 54px;"&gt;&amp;nbsp;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr style="height: 36px;"&gt;
&lt;td style="width: 12.5%; height: 36px;"&gt;windows 10&lt;/td&gt;
&lt;td style="width: 12.5%; height: 36px;"&gt;Install&amp;nbsp;&lt;span style="color: #444444; font-family: Arial, Helvetica, sans-serif;"&gt;NFS server&lt;/span&gt;&lt;/td&gt;
&lt;td style="width: 12.5%; height: 36px;"&gt;&amp;nbsp;&lt;/td&gt;
&lt;td style="width: 12.5%; height: 36px;"&gt;&lt;span style="color: #444444; font-family: Arial, Helvetica, sans-serif;"&gt;NFS Storage&lt;/span&gt;&lt;/td&gt;
&lt;td style="width: 12.5%; height: 36px;"&gt;Insall nfs-common&lt;/td&gt;
&lt;td style="width: 12.5%; height: 36px;"&gt;&amp;nbsp;&lt;/td&gt;
&lt;td style="width: 12.5%; height: 36px;"&gt;&amp;nbsp;&lt;/td&gt;
&lt;td style="width: 6.7685%; height: 36px;"&gt;&amp;nbsp;&lt;/td&gt;
&lt;td style="width: 51.037%; height: 36px;"&gt;&amp;nbsp;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr style="height: 36px;"&gt;
&lt;td style="width: 12.5%; height: 36px;"&gt;windows 10&lt;/td&gt;
&lt;td style="width: 12.5%; height: 36px;"&gt;SMB Storage&lt;/td&gt;
&lt;td style="width: 12.5%; height: 36px;"&gt;&amp;nbsp;&lt;/td&gt;
&lt;td style="width: 12.5%; height: 36px;"&gt;SMB Storage&lt;/td&gt;
&lt;td style="width: 12.5%; height: 36px;"&gt;Install smbclient cifs-utils&lt;/td&gt;
&lt;td style="width: 12.5%; height: 36px;"&gt;&amp;nbsp;&lt;/td&gt;
&lt;td style="width: 12.5%; height: 36px;"&gt;&amp;nbsp;&lt;/td&gt;
&lt;td style="width: 6.7685%; height: 36px;"&gt;&amp;nbsp;&lt;/td&gt;
&lt;td style="width: 51.037%; height: 36px;"&gt;ha Use windows share folder as data storage&lt;/td&gt;
&lt;/tr&gt;
&lt;tr style="height: 18px;"&gt;
&lt;td style="width: 12.5%; height: 18px;"&gt;&amp;nbsp;&lt;/td&gt;
&lt;td style="width: 12.5%; height: 18px;"&gt;&amp;nbsp;&lt;/td&gt;
&lt;td style="width: 12.5%; height: 18px;"&gt;FTP/FTPS&lt;/td&gt;
&lt;td style="width: 12.5%; height: 18px;"&gt;&amp;nbsp;&lt;/td&gt;
&lt;td style="width: 12.5%; height: 18px;"&gt;&amp;nbsp;&lt;/td&gt;
&lt;td style="width: 12.5%; height: 18px;"&gt;&amp;nbsp;&lt;/td&gt;
&lt;td style="width: 12.5%; height: 18px;"&gt;&amp;nbsp;&lt;/td&gt;
&lt;td style="width: 6.7685%; height: 18px;"&gt;&amp;nbsp;&lt;/td&gt;
&lt;td style="width: 51.037%; height: 18px;"&gt;&amp;nbsp;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr style="height: 18px;"&gt;
&lt;td style="width: 12.5%; height: 18px;"&gt;linux&lt;/td&gt;
&lt;td style="width: 12.5%; height: 18px;"&gt;SFTP Server&lt;/td&gt;
&lt;td style="width: 12.5%; height: 18px;"&gt;&amp;nbsp; SFTP over SSH(SSH File transfer protocol)&lt;/td&gt;
&lt;td style="width: 12.5%; height: 18px;"&gt;SFTP Storage&lt;/td&gt;
&lt;td style="width: 12.5%; height: 18px;"&gt;&amp;nbsp;&lt;/td&gt;
&lt;td style="width: 12.5%; height: 18px;"&gt;&amp;nbsp;&lt;/td&gt;
&lt;td style="width: 12.5%; height: 18px;"&gt;&amp;nbsp;&lt;/td&gt;
&lt;td style="width: 6.7685%; height: 18px;"&gt;&amp;nbsp;&lt;/td&gt;
&lt;td style="width: 51.037%; height: 18px;"&gt;&amp;nbsp;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr style="height: 36px;"&gt;
&lt;td style="width: 12.5%; height: 36px;"&gt;linux&lt;/td&gt;
&lt;td style="width: 12.5%; height: 36px;"&gt;Install NFS server&lt;/td&gt;
&lt;td style="width: 12.5%; height: 36px;"&gt;&amp;nbsp;&lt;/td&gt;
&lt;td style="width: 12.5%; height: 36px;"&gt;&amp;nbsp;&lt;/td&gt;
&lt;td style="width: 12.5%; height: 36px;"&gt;
&lt;pre class="language-csharp"&gt;&lt;code&gt;sudo mount -t nfs 192.168.1.10:/home/share /mnt&lt;/code&gt;&lt;/pre&gt;
&lt;/td&gt;
&lt;td style="width: 12.5%; height: 36px;"&gt;&amp;nbsp;&lt;/td&gt;
&lt;td style="width: 12.5%; height: 36px;"&gt;&amp;nbsp;&lt;/td&gt;
&lt;td style="width: 6.7685%; height: 36px;"&gt;&amp;nbsp;&lt;/td&gt;
&lt;td style="width: 51.037%; height: 36px;"&gt;&amp;nbsp;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr style="height: 18px;"&gt;
&lt;td style="width: 12.5%; height: 18px;"&gt;Usb&amp;nbsp;external storage&lt;/td&gt;
&lt;td style="width: 12.5%; height: 18px;"&gt;&amp;nbsp;&lt;/td&gt;
&lt;td style="width: 12.5%; height: 18px;"&gt;&amp;nbsp;&lt;/td&gt;
&lt;td style="width: 12.5%; height: 18px;"&gt;&amp;nbsp;&lt;/td&gt;
&lt;td style="width: 12.5%; height: 18px;"&gt;&amp;nbsp;&lt;/td&gt;
&lt;td style="width: 12.5%; height: 18px;"&gt;&amp;nbsp;&lt;/td&gt;
&lt;td style="width: 12.5%; height: 18px;"&gt;&amp;nbsp;&lt;/td&gt;
&lt;td style="width: 6.7685%; height: 18px;"&gt;&amp;nbsp;&lt;/td&gt;
&lt;td style="width: 51.037%; height: 18px;"&gt;&lt;a href="https://www.amazon.com/dp/B083ZS4HYD/?tag=rpitips-20&amp;amp;th=1"&gt;buy&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr style="height: 36px;"&gt;
&lt;td style="width: 12.5%; height: 36px;"&gt;remote cloud storage&amp;nbsp;&lt;/td&gt;
&lt;td style="width: 12.5%; height: 36px;"&gt;&amp;nbsp;&lt;/td&gt;
&lt;td style="width: 12.5%; height: 36px;"&gt;Amazon S3，Dropbox, Google Drive, Amazon&lt;/td&gt;
&lt;td style="width: 12.5%; height: 36px;"&gt;&amp;nbsp;&lt;/td&gt;
&lt;td style="width: 12.5%; height: 36px;"&gt;&amp;nbsp;&lt;/td&gt;
&lt;td style="width: 12.5%; height: 36px;"&gt;&amp;nbsp;&lt;/td&gt;
&lt;td style="width: 12.5%; height: 36px;"&gt;&amp;nbsp;&lt;/td&gt;
&lt;td style="width: 6.7685%; height: 36px;"&gt;&amp;nbsp;&lt;/td&gt;
&lt;td style="width: 51.037%; height: 36px;"&gt;&amp;nbsp;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 id="mcetoc_1jcskk1c72"&gt;Client&lt;/h2&gt;
&lt;p&gt;Client on the rpi5 (OS level)&lt;/p&gt;
&lt;h3 id="mcetoc_1jcskkm375"&gt;rpi5 -&amp;gt; smb folder share on windows 10&lt;/h3&gt;
&lt;p&gt;SMB - mount with cifs&lt;/p&gt;
&lt;pre class="language-csharp"&gt;&lt;code&gt;sudo mount -t cifs -o username=your_username,password=your_password //server_ip/share_name /mnt/smb_share&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;SMB - browse without mounting, using smbclient&lt;/p&gt;
&lt;pre class="language-csharp"&gt;&lt;code&gt;smbclient -L //192.168.2.21 -U test2

output

	Sharename       Type      Comment
	---------       ----      -------

	nextcloud       Disk      
	shareforgiteagitserver Disk     &lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;pre class="language-csharp"&gt;&lt;code&gt;smbclient //192.168.2.21/nextcloud -U username

output

smb: \&amp;gt; l
  .                                   D        0  Mon Dec 15 03:50:39 2025
  ..                                  D        0  Mon Dec 15 03:50:39 2025
  data                                D        0  Mon Dec 15 03:55:30 2025

		732562175 blocks of size 4096. 91858085 blocks available
smb: \&amp;gt; 
&lt;/code&gt;&lt;/pre&gt;</description>
  <author>blog.matterxiaomi.com</author>
  <guid isPermaLink="false">http://blog.matterxiaomi.com/blog/webdav/</guid>
  <pubDate>Sun, 23 Nov 2025 15:16:36 GMT</pubDate>
</item></channel>
</rss>