# ComfyUI Error Report
## Error Details
- Node ID: 39
- Node Type: MoondreamQuery
- Exception Type: AttributeError
- Exception Message: 'DynamicCache' object has no attribute 'get_max_length'
## Stack Trace
File "E:\zsy\ComfyUI\execution.py", line 323, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "E:\zsy\ComfyUI\execution.py", line 198, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "E:\zsy\ComfyUI\execution.py", line 169, in _map_node_over_list
process_inputs(input_dict, i)
File "E:\zsy\ComfyUI\execution.py", line 158, in process_inputs
results.append(getattr(obj, func)(**inputs))
File "E:\zsy\ComfyUI\custom_nodes\ComfyUI-moondream-main\nodes.py", line 81, in process
answer = self.moondream.answer_question(image_embeds, question, self.tokenizer, max_new_tokens)
File "E:\zsy\ComfyUI\custom_nodes\ComfyUI-moondream-main\moondream\moondream.py", line 93, in answer_question
answer = self.generate(
File "E:\zsy\ComfyUI\custom_nodes\ComfyUI-moondream-main\moondream\moondream.py", line 76, in generate
output_ids = self.text_model.generate(
File "E:\zsy\ComfyUI\python\lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "E:\zsy\ComfyUI\python\lib\site-packages\transformers\generation\utils.py", line 2326, in generate
result = self._sample(
File "E:\zsy\ComfyUI\python\lib\site-packages\transformers\generation\utils.py", line 3279, in _sample
model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs)
File "E:\zsy\ComfyUI\custom_nodes\ComfyUI-moondream-main\moondream\modeling_phi.py", line 1158, in prepare_inputs_for_generation
max_cache_length = past_key_values.get_max_length()
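For context: the exception comes from the bundled `modeling_phi.py`, which calls `past_key_values.get_max_length()`. That `Cache` method was deprecated and appears to have been removed in recent transformers releases in favor of `get_max_cache_shape()`, which is why the `DynamicCache` created by `generate()` no longer exposes it. A minimal compatibility sketch (the helper name is hypothetical, not part of moondream or transformers) that `prepare_inputs_for_generation` could use instead of the direct call:

```python
# Hedged sketch: resolve the maximum cache length across old and new transformers APIs.
# Only get_max_cache_shape() / get_max_length() are assumed to be real Cache methods.
def get_max_cache_length(past_key_values):
    """Return the cache's maximum length, or None if it is unbounded or unknown."""
    if past_key_values is None:
        return None
    if hasattr(past_key_values, "get_max_cache_shape"):
        # Newer transformers; DynamicCache returns None here (no fixed limit).
        return past_key_values.get_max_cache_shape()
    if hasattr(past_key_values, "get_max_length"):
        # Older transformers still ship the original method.
        return past_key_values.get_max_length()
    return None
```

With such a helper, the failing line in `modeling_phi.py` would become `max_cache_length = get_max_cache_length(past_key_values)`; in the stock Phi code this value is only compared when it is not `None`, so a `None` return should be safe.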
## System Information
- ComfyUI Version: v0.3.4
- Arguments: E:\zsy\ComfyUI\main.py --listen --auto-launch --preview-method auto --disable-cuda-malloc
- OS: nt
- Python Version: 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
- Embedded Python: false
- PyTorch Version: 2.5.1+cu124
## Devices
- Name: cuda:0 NVIDIA RTX A5000 : cudaMallocAsync
- Type: cuda
- VRAM Total: 25756696576
- VRAM Free: 20593122848
- Torch VRAM Total: 3791650816
- Torch VRAM Free: 36838944
## Logs
2025-04-07T19:49:38.678335 - [START] Security scan2025-04-07T19:49:38.678335 -
2025-04-07T19:49:42.524103 - [DONE] Security scan2025-04-07T19:49:42.524103 -
2025-04-07T19:49:42.647103 - ## ComfyUI-Manager: installing dependencies done.2025-04-07T19:49:42.647103 -
2025-04-07T19:49:42.647103 - ** ComfyUI startup time:2025-04-07T19:49:42.647103 - 2025-04-07T19:49:42.647103 - 2025-04-07 19:49:42.6471032025-04-07T19:49:42.647103 -
2025-04-07T19:49:42.647103 - ** Platform:2025-04-07T19:49:42.647103 - 2025-04-07T19:49:42.647103 - Windows2025-04-07T19:49:42.647103 -
2025-04-07T19:49:42.647103 - ** Python version:2025-04-07T19:49:42.647103 - 2025-04-07T19:49:42.647103 - 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]2025-04-07T19:49:42.647103 -
2025-04-07T19:49:42.647103 - ** Python executable:2025-04-07T19:49:42.648099 - 2025-04-07T19:49:42.648099 - E:\zsy\ComfyUI\python\python.exe2025-04-07T19:49:42.648099 -
2025-04-07T19:49:42.648099 - ** ComfyUI Path:2025-04-07T19:49:42.648099 - 2025-04-07T19:49:42.648099 - E:\zsy\ComfyUI2025-04-07T19:49:42.648099 -
2025-04-07T19:49:42.648099 - ** Log path:2025-04-07T19:49:42.648099 - 2025-04-07T19:49:42.648099 - E:\zsy\ComfyUI\comfyui.log2025-04-07T19:49:42.648099 -
2025-04-07T19:49:59.418294 - [manager-core] 'opencv' dependencies were fixed: ['opencv-contrib-python', 'opencv-python', 'opencv-python-headless']2025-04-07T19:49:59.418294 -
2025-04-07T19:49:59.422295 -
Prestartup times for custom nodes:2025-04-07T19:49:59.422295 -
2025-04-07T19:49:59.422295 - 0.0 seconds:2025-04-07T19:49:59.422295 - 2025-04-07T19:49:59.422295 - E:\zsy\ComfyUI\custom_nodes\rgthree-comfy2025-04-07T19:49:59.422295 -
2025-04-07T19:49:59.422295 - 0.0 seconds:2025-04-07T19:49:59.422295 - 2025-04-07T19:49:59.422295 - E:\zsy\ComfyUI\custom_nodes\ComfyUI-Easy-Use2025-04-07T19:49:59.422295 -
2025-04-07T19:49:59.422295 - 0.0 seconds:2025-04-07T19:49:59.422295 - 2025-04-07T19:49:59.422295 - E:\zsy\ComfyUI\custom_nodes\ComfyUI-Marigold2025-04-07T19:49:59.422295 -
2025-04-07T19:49:59.422295 - 20.7 seconds:2025-04-07T19:49:59.422295 - 2025-04-07T19:49:59.422295 - E:\zsy\ComfyUI\custom_nodes\ComfyUI-Manager2025-04-07T19:49:59.422295 -
2025-04-07T19:49:59.422295 -
2025-04-07T19:50:01.957677 - Total VRAM 24564 MB, total RAM 65175 MB
2025-04-07T19:50:01.957677 - pytorch version: 2.5.1+cu124
2025-04-07T19:50:03.707901 - xformers version: 0.0.28.post3
2025-04-07T19:50:03.708898 - Set vram state to: NORMAL_VRAM
2025-04-07T19:50:03.708898 - Device: cuda:0 NVIDIA RTX A5000 : cudaMallocAsync
2025-04-07T19:50:03.940898 - Using xformers cross attention
2025-04-07T19:50:05.891233 - [Prompt Server] web root: E:\zsy\ComfyUI\web
2025-04-07T19:50:06.737233 - [AnimateDiffEvo] - ERROR - No motion models found. Please download one and place in: ['E:\\zsy\\ComfyUI\\custom_nodes\\ComfyUI-AnimateDiff-Evolved\\models', 'E:\\zsy\\ComfyUI\\models\\animatediff_models']
2025-04-07T19:50:06.772233 - [Crystools INFO] Crystools version: 1.21.0
2025-04-07T19:50:06.797234 - [Crystools INFO] CPU: Intel(R) Xeon(R) w7-3465X - Arch: AMD64 - OS: Windows 10
2025-04-07T19:50:06.809233 - [Crystools INFO] Pynvml (Nvidia) initialized.
2025-04-07T19:50:06.809233 - [Crystools INFO] GPU/s:
2025-04-07T19:50:06.829233 - [Crystools INFO] 0) NVIDIA RTX A5000
2025-04-07T19:50:06.845233 - [Crystools INFO] 1) NVIDIA RTX A5000
2025-04-07T19:50:06.845233 - [Crystools INFO] NVIDIA Driver: 537.13
2025-04-07T19:50:07.728812 - [ComfyUI-Easy-Use] server: v1.2.8 Loaded2025-04-07T19:50:07.728812 -
2025-04-07T19:50:07.728812 - [ComfyUI-Easy-Use] web root: E:\zsy\ComfyUI\custom_nodes\ComfyUI-Easy-Use\web_version/v2 Loaded2025-04-07T19:50:07.728812 -
2025-04-07T19:50:07.741813 - [Moondream] trying to update model versions...2025-04-07T19:50:07.741813 - 2025-04-07T19:50:49.830787 - failed (HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /vikhyatk/moondream2/raw/main/versions.txt (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x000002018DE19750>, 'Connection to huggingface.co timed out. (connect timeout=None)')))2025-04-07T19:50:49.830787 -
2025-04-07T19:50:49.831787 - [Moondream] found model versions: 2024-03-04, 2024-03-06, 2024-03-13, 2024-04-02, 2024-05-08, 2024-05-20, 2024-07-23 (Thundermoon), 2024-08-26, 2025-01-092025-04-07T19:50:49.831787 -
2025-04-07T19:50:49.836787 - ### Loading: ComfyUI-Impact-Pack (V7.11.5)2025-04-07T19:50:49.836787 -
2025-04-07T19:50:50.836787 - ### Loading: ComfyUI-Impact-Pack (Subpack: V0.6)2025-04-07T19:50:50.836787 -
2025-04-07T19:50:50.836787 - [WARN] ComfyUI-Impact-Pack: custom_wildcards path not found: C:\aki\AAPACKING\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Impact-Pack\modules\impact\..\..\custom_wildcards. Using default path.2025-04-07T19:50:50.836787 -
2025-04-07T19:50:50.871787 - [Impact Pack] Wildcards loading done.2025-04-07T19:50:50.871787 -
2025-04-07T19:50:50.886787 - ### Loading: ComfyUI-Inspire-Pack (V1.7)2025-04-07T19:50:50.886787 -
2025-04-07T19:50:50.966787 - ### Loading: ComfyUI-Inspire-Pack (V1.16)
2025-04-07T19:50:57.736575 - Total VRAM 24564 MB, total RAM 65175 MB
2025-04-07T19:50:57.736575 - pytorch version: 2.5.1+cu124
2025-04-07T19:50:57.736575 - xformers version: 0.0.28.post3
2025-04-07T19:50:57.736575 - Set vram state to: NORMAL_VRAM
2025-04-07T19:50:57.736575 - Device: cuda:0 NVIDIA RTX A5000 : cudaMallocAsync
2025-04-07T19:50:57.784575 - ### Loading: ComfyUI-Manager (V2.52)2025-04-07T19:50:57.784575 -
2025-04-07T19:50:57.937575 - ### ComfyUI Revision: 2854 [6e8cdcd3] | Released on '2024-11-22'2025-04-07T19:50:57.937575 -
2025-04-07T19:50:58.303434 - (pysssss:WD14Tagger) [DEBUG] Available ORT providers: TensorrtExecutionProvider, CUDAExecutionProvider, CPUExecutionProvider2025-04-07T19:50:58.303434 -
2025-04-07T19:50:58.303434 - (pysssss:WD14Tagger) [DEBUG] Using ORT providers: CUDAExecutionProvider, CPUExecutionProvider2025-04-07T19:50:58.303434 -
2025-04-07T19:50:58.346433 - ------------------------------------------2025-04-07T19:50:58.346433 -
2025-04-07T19:50:58.346433 - Comfyroll Studio v1.76 : 175 Nodes Loaded2025-04-07T19:50:58.346433 -
2025-04-07T19:50:58.346433 - ------------------------------------------2025-04-07T19:50:58.346433 -
2025-04-07T19:50:58.346433 - ** For changes, please see patch notes at https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes/blob/main/Patch_Notes.md2025-04-07T19:50:58.346433 -
2025-04-07T19:50:58.346433 - ** For help, please see the wiki at https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes/wiki2025-04-07T19:50:58.346433 -
2025-04-07T19:50:58.346433 - ------------------------------------------2025-04-07T19:50:58.346433 -
2025-04-07T19:50:58.358433 - [comfyui_controlnet_aux] | INFO -> Using ckpts path: E:\zsy\ComfyUI\custom_nodes\comfyui_controlnet_aux\ckpts
2025-04-07T19:50:58.359433 - [comfyui_controlnet_aux] | INFO -> Using symlinks: False
2025-04-07T19:50:58.359433 - [comfyui_controlnet_aux] | INFO -> Using ort providers: ['CUDAExecutionProvider', 'DirectMLExecutionProvider', 'OpenVINOExecutionProvider', 'ROCMExecutionProvider', 'CPUExecutionProvider', 'CoreMLExecutionProvider']
2025-04-07T19:50:58.385433 - DWPose: Onnxruntime with acceleration providers detected2025-04-07T19:50:58.385433 -
2025-04-07T19:50:58.418433 - ### [START] ComfyUI AlekPet Nodes v1.0.35 ###2025-04-07T19:50:58.418433 -
2025-04-07T19:51:01.027342 - Node -> ChatGLMNode: ChatGLM4TranslateCLIPTextEncodeNode, ChatGLM4TranslateTextNode [Loading] 2025-04-07T19:51:01.028343 -
2025-04-07T19:51:01.033341 - Node -> ArgosTranslateNode: ArgosTranslateCLIPTextEncodeNode, ArgosTranslateTextNode [Loading] 2025-04-07T19:51:01.033341 -
2025-04-07T19:51:01.044344 - Node -> DeepTranslatorNode: DeepTranslatorCLIPTextEncodeNode, DeepTranslatorTextNode [Loading] 2025-04-07T19:51:01.044344 -
2025-04-07T19:51:01.044344 - Node -> GoogleTranslateNode: GoogleTranslateCLIPTextEncodeNode, GoogleTranslateTextNode [Loading] 2025-04-07T19:51:01.044344 -
2025-04-07T19:51:01.054340 - Node -> ExtrasNode: PreviewTextNode, HexToHueNode, ColorsCorrectNode [Loading] 2025-04-07T19:51:01.054340 -
2025-04-07T19:51:01.055342 - Node -> PoseNode: PoseNode [Loading] 2025-04-07T19:51:01.055342 -
2025-04-07T19:51:01.118341 - Node -> IDENode: IDENode [Loading] 2025-04-07T19:51:01.118341 -
2025-04-07T19:51:01.312947 - Node -> PainterNode: PainterNode [Loading] 2025-04-07T19:51:01.312947 -
2025-04-07T19:51:01.313944 - ### [END] ComfyUI AlekPet Nodes ###2025-04-07T19:51:01.313944 -
2025-04-07T19:51:01.838948 - FizzleDorf Custom Nodes: Loaded2025-04-07T19:51:01.838948 -
2025-04-07T19:51:02.081898 -
Efficiency Nodes: Attempting to add Control Net options to the 'HiRes-Fix Script' Node (comfyui_controlnet_aux add-on)...Success!2025-04-07T19:51:02.081898 -
2025-04-07T19:51:02.087898 - Patching UNetModel.forward2025-04-07T19:51:02.087898 -
2025-04-07T19:51:02.087898 - UNetModel.forward has been successfully patched.2025-04-07T19:51:02.087898 -
2025-04-07T19:51:02.102898 - [Power Noise Suite]: 🦚🦚🦚 kweh.. 🦚🦚🦚2025-04-07T19:51:02.102898 -
2025-04-07T19:51:02.102898 - [Power Noise Suite]: Tamed 11 wild nodes.2025-04-07T19:51:02.102898 -
2025-04-07T19:51:02.125898 -
2025-04-07T19:51:02.126898 - [rgthree-comfy] Loaded 42 exciting nodes. 🎉2025-04-07T19:51:02.126898 -
2025-04-07T19:51:02.126898 -
2025-04-07T19:51:02.130898 -
Import times for custom nodes:
2025-04-07T19:51:02.131898 - 0.0 seconds: E:\zsy\ComfyUI\custom_nodes\zsy_SelectPromptFromIndex.py
2025-04-07T19:51:02.131898 - 0.0 seconds: E:\zsy\ComfyUI\custom_nodes\websocket_image_save.py
2025-04-07T19:51:02.131898 - 0.0 seconds: E:\zsy\ComfyUI\custom_nodes\AIGODLIKE-ComfyUI-Translation
2025-04-07T19:51:02.131898 - 0.0 seconds: E:\zsy\ComfyUI\custom_nodes\ComfyUI_AdvancedRefluxControl
2025-04-07T19:51:02.131898 - 0.0 seconds: E:\zsy\ComfyUI\custom_nodes\ControlNet-LLLite-ComfyUI
2025-04-07T19:51:02.131898 - 0.0 seconds: E:\zsy\ComfyUI\custom_nodes\FreeU_Advanced
2025-04-07T19:51:02.131898 - 0.0 seconds: E:\zsy\ComfyUI\custom_nodes\cg-use-everywhere-main
2025-04-07T19:51:02.131898 - 0.0 seconds: E:\zsy\ComfyUI\custom_nodes\ComfyUI_TiledKSampler
2025-04-07T19:51:02.131898 - 0.0 seconds: E:\zsy\ComfyUI\custom_nodes\stability-ComfyUI-nodes
2025-04-07T19:51:02.131898 - 0.0 seconds: E:\zsy\ComfyUI\custom_nodes\ComfyUI-WD14-Tagger
2025-04-07T19:51:02.131898 - 0.0 seconds: E:\zsy\ComfyUI\custom_nodes\ComfyUI_experiments
2025-04-07T19:51:02.131898 - 0.0 seconds: E:\zsy\ComfyUI\custom_nodes\PowerNoiseSuite
2025-04-07T19:51:02.132898 - 0.0 seconds: E:\zsy\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus
2025-04-07T19:51:02.132898 - 0.0 seconds: E:\zsy\ComfyUI\custom_nodes\ComfyUI-moondream-main
2025-04-07T19:51:02.132898 - 0.0 seconds: E:\zsy\ComfyUI\custom_nodes\images-grid-comfy-plugin
2025-04-07T19:51:02.132898 - 0.0 seconds: E:\zsy\ComfyUI\custom_nodes\ComfyUI-Custom-Scripts
2025-04-07T19:51:02.132898 - 0.0 seconds: E:\zsy\ComfyUI\custom_nodes\ComfyUI_UltimateSDUpscale
2025-04-07T19:51:02.132898 - 0.0 seconds: E:\zsy\ComfyUI\custom_nodes\ComfyUI_essentials
2025-04-07T19:51:02.132898 - 0.0 seconds: E:\zsy\ComfyUI\custom_nodes\Derfuu_ComfyUI_ModdedNodes
2025-04-07T19:51:02.132898 - 0.0 seconds: E:\zsy\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet
2025-04-07T19:51:02.132898 - 0.0 seconds: E:\zsy\ComfyUI\custom_nodes\rgthree-comfy
2025-04-07T19:51:02.132898 - 0.0 seconds: E:\zsy\ComfyUI\custom_nodes\ComfyUI-IPAdapter-Flux-main
2025-04-07T19:51:02.132898 - 0.0 seconds: E:\zsy\ComfyUI\custom_nodes\efficiency-nodes-comfyui
2025-04-07T19:51:02.132898 - 0.0 seconds: E:\zsy\ComfyUI\custom_nodes\ComfyUI-Marigold
2025-04-07T19:51:02.132898 - 0.0 seconds: E:\zsy\ComfyUI\custom_nodes\ComfyUI_Comfyroll_CustomNodes
2025-04-07T19:51:02.132898 - 0.1 seconds: E:\zsy\ComfyUI\custom_nodes\ComfyUI-KJNodes
2025-04-07T19:51:02.132898 - 0.1 seconds: E:\zsy\ComfyUI\custom_nodes\comfyui_controlnet_aux
2025-04-07T19:51:02.132898 - 0.1 seconds: E:\zsy\ComfyUI\custom_nodes\ComfyUI-Inspire-Pack-main
2025-04-07T19:51:02.132898 - 0.1 seconds: E:\zsy\ComfyUI\custom_nodes\ComfyUI-Inspire-Pack
2025-04-07T19:51:02.132898 - 0.1 seconds: E:\zsy\ComfyUI\custom_nodes\ComfyUI-Crystools
2025-04-07T19:51:02.133898 - 0.2 seconds: E:\zsy\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved
2025-04-07T19:51:02.133898 - 0.2 seconds: E:\zsy\ComfyUI\custom_nodes\ComfyUI_LayerStyle-main
2025-04-07T19:51:02.133898 - 0.4 seconds: E:\zsy\ComfyUI\custom_nodes\ComfyUI_FizzNodes
2025-04-07T19:51:02.133898 - 0.5 seconds: E:\zsy\ComfyUI\custom_nodes\ComfyUI-Manager
2025-04-07T19:51:02.133898 - 0.9 seconds: E:\zsy\ComfyUI\custom_nodes\ComfyUI-Easy-Use
2025-04-07T19:51:02.133898 - 1.1 seconds: E:\zsy\ComfyUI\custom_nodes\ComfyUI-Impact-Pack
2025-04-07T19:51:02.133898 - 3.0 seconds: E:\zsy\ComfyUI\custom_nodes\ComfyUI_Custom_Nodes_AlekPet
2025-04-07T19:51:02.133898 - 6.7 seconds: E:\zsy\ComfyUI\custom_nodes\ComfyUI-Inspyrenet-Rembg-main
2025-04-07T19:51:02.133898 - 42.1 seconds: E:\zsy\ComfyUI\custom_nodes\ComfyUI-Hangover-Moondream-main
2025-04-07T19:51:02.133898 -
2025-04-07T19:51:02.153898 - Starting server
2025-04-07T19:51:02.154898 - To see the GUI go to: http://0.0.0.0:8188
2025-04-07T19:51:02.154898 - To see the GUI go to: http://[::]:8188
2025-04-07T19:51:03.414897 - FETCH DATA from: E:\zsy\ComfyUI\custom_nodes\ComfyUI-Manager\extension-node-map.json2025-04-07T19:51:03.414897 - 2025-04-07T19:51:03.420898 - [DONE]2025-04-07T19:51:03.420898 -
2025-04-07T19:51:20.357507 - Exception in thread 2025-04-07T19:51:20.357507 - Thread-9 (<lambda>)2025-04-07T19:51:20.357507 - :
2025-04-07T19:51:20.357507 - Traceback (most recent call last):
2025-04-07T19:51:20.357507 - File "E:\zsy\ComfyUI\python\lib\threading.py", line 1016, in _bootstrap_inner
2025-04-07T19:51:20.357507 - 2025-04-07T19:51:20.357507 - self.run()2025-04-07T19:51:20.357507 -
2025-04-07T19:51:20.357507 - File "<enhanced_experience vendors.sentry_sdk.integrations.threading>", line 99, in run
2025-04-07T19:51:20.359508 - File "<enhanced_experience vendors.sentry_sdk.integrations.threading>", line 94, in _run_old_run_func
2025-04-07T19:51:20.361507 - File "<enhanced_experience vendors.sentry_sdk.utils>", line 1649, in reraise
2025-04-07T19:51:20.362507 - File "<enhanced_experience vendors.sentry_sdk.integrations.threading>", line 92, in _run_old_run_func
2025-04-07T19:51:20.364507 - File "E:\zsy\ComfyUI\python\lib\threading.py", line 953, in run
2025-04-07T19:51:20.364507 - 2025-04-07T19:51:20.364507 - self._target(*self._args, **self._kwargs)2025-04-07T19:51:20.364507 -
2025-04-07T19:51:20.364507 - File "E:\zsy\ComfyUI\custom_nodes\ComfyUI-Manager\glob\manager_server.py", line 1344, in <lambda>
2025-04-07T19:51:20.365507 - 2025-04-07T19:51:20.365507 - threading.Thread(target=lambda: asyncio.run(default_cache_update())).start()2025-04-07T19:51:20.365507 -
2025-04-07T19:51:20.365507 - File "E:\zsy\ComfyUI\python\lib\asyncio\runners.py", line 44, in run
2025-04-07T19:51:20.365507 - 2025-04-07T19:51:20.365507 - return loop.run_until_complete(main)2025-04-07T19:51:20.365507 -
2025-04-07T19:51:20.365507 - File "E:\zsy\ComfyUI\python\lib\asyncio\base_events.py", line 649, in run_until_complete
2025-04-07T19:51:20.365507 - 2025-04-07T19:51:20.365507 - return future.result()2025-04-07T19:51:20.365507 -
2025-04-07T19:51:20.365507 - File "E:\zsy\ComfyUI\custom_nodes\ComfyUI-Manager\glob\manager_server.py", line 1341, in default_cache_update
2025-04-07T19:51:20.366507 - 2025-04-07T19:51:20.366507 - await asyncio.gather(a, b, c, d, e)2025-04-07T19:51:20.366507 -
2025-04-07T19:51:20.366507 - File "E:\zsy\ComfyUI\custom_nodes\ComfyUI-Manager\glob\manager_server.py", line 1328, in get_cache
2025-04-07T19:51:20.366507 - 2025-04-07T19:51:20.366507 - json_obj = await core.get_data(uri, True)2025-04-07T19:51:20.366507 -
2025-04-07T19:51:20.366507 - File "E:\zsy\ComfyUI\custom_nodes\ComfyUI-Manager\glob\manager_core.py", line 623, in get_data
2025-04-07T19:51:20.366507 - 2025-04-07T19:51:20.366507 - async with session.get(uri) as resp:2025-04-07T19:51:20.366507 -
2025-04-07T19:51:20.366507 - File "E:\zsy\ComfyUI\python\lib\site-packages\aiohttp\client.py", line 1141, in __aenter__
2025-04-07T19:51:20.366507 - 2025-04-07T19:51:20.366507 - self._resp = await self._coro2025-04-07T19:51:20.366507 -
2025-04-07T19:51:20.367507 - File "E:\zsy\ComfyUI\python\lib\site-packages\aiohttp\client.py", line 560, in _request
2025-04-07T19:51:20.367507 - 2025-04-07T19:51:20.367507 - await resp.start(conn)2025-04-07T19:51:20.367507 -
2025-04-07T19:51:20.367507 - File "E:\zsy\ComfyUI\python\lib\site-packages\aiohttp\client_reqrep.py", line 899, in start
2025-04-07T19:51:20.367507 - 2025-04-07T19:51:20.367507 - message, payload = await protocol.read() # type: ignore[union-attr]2025-04-07T19:51:20.367507 -
2025-04-07T19:51:20.367507 - File "E:\zsy\ComfyUI\python\lib\site-packages\aiohttp\streams.py", line 616, in read
2025-04-07T19:51:20.367507 - 2025-04-07T19:51:20.367507 - await self._waiter2025-04-07T19:51:20.367507 -
2025-04-07T19:51:20.367507 - aiohttp.client_exceptions.ClientOSError: [WinError 121] The semaphore timeout period has expired
2025-04-07T19:52:13.141303 - got prompt
2025-04-07T19:52:15.855779 - PhiForCausalLM has generative capabilities, as `prepare_inputs_for_generation` is explicitly overwritten. However, it doesn't directly inherit from `GenerationMixin`. From 👉v4.50👈 onwards, `PreTrainedModel` will NOT inherit from `GenerationMixin`, and this model will lose the ability to call `generate` and other related functions.
- If you're using `trust_remote_code=True`, you can get rid of this warning by loading the model with an auto class. See https://huggingface.co/docs/transformers/en/model_doc/auto#auto-classes
- If you are the owner of the model architecture code, please modify your model class such that it inherits from `GenerationMixin` (after `PreTrainedModel`, otherwise you'll get an exception).
- If you are not the owner of the model architecture class, please contact the model code owner to update it.
2025-04-07T19:52:16.079333 - Some weights of Moondream were not initialized from the model checkpoint at E:\zsy\ComfyUI\custom_nodes\ComfyUI-moondream-main\checkpoints/moondream2 and are newly initialized: ['text_model.lm_head.linear.bias', 'text_model.lm_head.linear.weight', 'text_model.lm_head.ln.bias', 'text_model.lm_head.ln.weight', 'text_model.transformer.embd.wte.weight', 'text_model.transformer.h.0.ln.bias', 'text_model.transformer.h.0.ln.weight', 'text_model.transformer.h.0.mixer.Wqkv.bias', 'text_model.transformer.h.0.mixer.Wqkv.weight', 'text_model.transformer.h.0.mixer.out_proj.bias', 'text_model.transformer.h.0.mixer.out_proj.weight', 'text_model.transformer.h.0.mlp.fc1.bias', 'text_model.transformer.h.0.mlp.fc1.weight', 'text_model.transformer.h.0.mlp.fc2.bias', 'text_model.transformer.h.0.mlp.fc2.weight', 'text_model.transformer.h.1.ln.bias', 'text_model.transformer.h.1.ln.weight', 'text_model.transformer.h.1.mixer.Wqkv.bias', 'text_model.transformer.h.1.mixer.Wqkv.weight', 'text_model.transformer.h.1.mixer.out_proj.bias', 'text_model.transformer.h.1.mixer.out_proj.weight', 'text_model.transformer.h.1.mlp.fc1.bias', 'text_model.transformer.h.1.mlp.fc1.weight', 'text_model.transformer.h.1.mlp.fc2.bias', 'text_model.transformer.h.1.mlp.fc2.weight', 'text_model.transformer.h.10.ln.bias', 'text_model.transformer.h.10.ln.weight', 'text_model.transformer.h.10.mixer.Wqkv.bias', 'text_model.transformer.h.10.mixer.Wqkv.weight', 'text_model.transformer.h.10.mixer.out_proj.bias', 'text_model.transformer.h.10.mixer.out_proj.weight', 'text_model.transformer.h.10.mlp.fc1.bias', 'text_model.transformer.h.10.mlp.fc1.weight', 'text_model.transformer.h.10.mlp.fc2.bias', 'text_model.transformer.h.10.mlp.fc2.weight', 'text_model.transformer.h.11.ln.bias', 'text_model.transformer.h.11.ln.weight', 'text_model.transformer.h.11.mixer.Wqkv.bias', 'text_model.transformer.h.11.mixer.Wqkv.weight', 'text_model.transformer.h.11.mixer.out_proj.bias', 'text_model.transformer.h.11.mixer.out_proj.weight', 'text_model.transformer.h.11.mlp.fc1.bias', 'text_model.transformer.h.11.mlp.fc1.weight', 'text_model.transformer.h.11.mlp.fc2.bias', 'text_model.transformer.h.11.mlp.fc2.weight', 'text_model.transformer.h.12.ln.bias', 'text_model.transformer.h.12.ln.weight', 'text_model.transformer.h.12.mixer.Wqkv.bias', 'text_model.transformer.h.12.mixer.Wqkv.weight', 'text_model.transformer.h.12.mixer.out_proj.bias', 'text_model.transformer.h.12.mixer.out_proj.weight', 'text_model.transformer.h.12.mlp.fc1.bias', 'text_model.transformer.h.12.mlp.fc1.weight', 'text_model.transformer.h.12.mlp.fc2.bias', 'text_model.transformer.h.12.mlp.fc2.weight', 'text_model.transformer.h.13.ln.bias', 'text_model.transformer.h.13.ln.weight', 'text_model.transformer.h.13.mixer.Wqkv.bias', 'text_model.transformer.h.13.mixer.Wqkv.weight', 'text_model.transformer.h.13.mixer.out_proj.bias', 'text_model.transformer.h.13.mixer.out_proj.weight', 'text_model.transformer.h.13.mlp.fc1.bias', 'text_model.transformer.h.13.mlp.fc1.weight', 'text_model.transformer.h.13.mlp.fc2.bias', 'text_model.transformer.h.13.mlp.fc2.weight', 'text_model.transformer.h.14.ln.bias', 'text_model.transformer.h.14.ln.weight', 'text_model.transformer.h.14.mixer.Wqkv.bias', 'text_model.transformer.h.14.mixer.Wqkv.weight', 'text_model.transformer.h.14.mixer.out_proj.bias', 'text_model.transformer.h.14.mixer.out_proj.weight', 'text_model.transformer.h.14.mlp.fc1.bias', 'text_model.transformer.h.14.mlp.fc1.weight', 'text_model.transformer.h.14.mlp.fc2.bias', 
'text_model.transformer.h.14.mlp.fc2.weight', 'text_model.transformer.h.15.ln.bias', 'text_model.transformer.h.15.ln.weight', 'text_model.transformer.h.15.mixer.Wqkv.bias', 'text_model.transformer.h.15.mixer.Wqkv.weight', 'text_model.transformer.h.15.mixer.out_proj.bias', 'text_model.transformer.h.15.mixer.out_proj.weight', 'text_model.transformer.h.15.mlp.fc1.bias', 'text_model.transformer.h.15.mlp.fc1.weight', 'text_model.transformer.h.15.mlp.fc2.bias', 'text_model.transformer.h.15.mlp.fc2.weight', 'text_model.transformer.h.16.ln.bias', 'text_model.transformer.h.16.ln.weight', 'text_model.transformer.h.16.mixer.Wqkv.bias', 'text_model.transformer.h.16.mixer.Wqkv.weight', 'text_model.transformer.h.16.mixer.out_proj.bias', 'text_model.transformer.h.16.mixer.out_proj.weight', 'text_model.transformer.h.16.mlp.fc1.bias', 'text_model.transformer.h.16.mlp.fc1.weight', 'text_model.transformer.h.16.mlp.fc2.bias', 'text_model.transformer.h.16.mlp.fc2.weight', 'text_model.transformer.h.17.ln.bias', 'text_model.transformer.h.17.ln.weight', 'text_model.transformer.h.17.mixer.Wqkv.bias', 'text_model.transformer.h.17.mixer.Wqkv.weight', 'text_model.transformer.h.17.mixer.out_proj.bias', 'text_model.transformer.h.17.mixer.out_proj.weight', 'text_model.transformer.h.17.mlp.fc1.bias', 'text_model.transformer.h.17.mlp.fc1.weight', 'text_model.transformer.h.17.mlp.fc2.bias', 'text_model.transformer.h.17.mlp.fc2.weight', 'text_model.transformer.h.18.ln.bias', 'text_model.transformer.h.18.ln.weight', 'text_model.transformer.h.18.mixer.Wqkv.bias', 'text_model.transformer.h.18.mixer.Wqkv.weight', 'text_model.transformer.h.18.mixer.out_proj.bias', 'text_model.transformer.h.18.mixer.out_proj.weight', 'text_model.transformer.h.18.mlp.fc1.bias', 'text_model.transformer.h.18.mlp.fc1.weight', 'text_model.transformer.h.18.mlp.fc2.bias', 'text_model.transformer.h.18.mlp.fc2.weight', 'text_model.transformer.h.19.ln.bias', 'text_model.transformer.h.19.ln.weight', 'text_model.transformer.h.19.mixer.Wqkv.bias', 'text_model.transformer.h.19.mixer.Wqkv.weight', 'text_model.transformer.h.19.mixer.out_proj.bias', 'text_model.transformer.h.19.mixer.out_proj.weight', 'text_model.transformer.h.19.mlp.fc1.bias', 'text_model.transformer.h.19.mlp.fc1.weight', 'text_model.transformer.h.19.mlp.fc2.bias', 'text_model.transformer.h.19.mlp.fc2.weight', 'text_model.transformer.h.2.ln.bias', 'text_model.transformer.h.2.ln.weight', 'text_model.transformer.h.2.mixer.Wqkv.bias', 'text_model.transformer.h.2.mixer.Wqkv.weight', 'text_model.transformer.h.2.mixer.out_proj.bias', 'text_model.transformer.h.2.mixer.out_proj.weight', 'text_model.transformer.h.2.mlp.fc1.bias', 'text_model.transformer.h.2.mlp.fc1.weight', 'text_model.transformer.h.2.mlp.fc2.bias', 'text_model.transformer.h.2.mlp.fc2.weight', 'text_model.transformer.h.20.ln.bias', 'text_model.transformer.h.20.ln.weight', 'text_model.transformer.h.20.mixer.Wqkv.bias', 'text_model.transformer.h.20.mixer.Wqkv.weight', 'text_model.transformer.h.20.mixer.out_proj.bias', 'text_model.transformer.h.20.mixer.out_proj.weight', 'text_model.transformer.h.20.mlp.fc1.bias', 'text_model.transformer.h.20.mlp.fc1.weight', 'text_model.transformer.h.20.mlp.fc2.bias', 'text_model.transformer.h.20.mlp.fc2.weight', 'text_model.transformer.h.21.ln.bias', 'text_model.transformer.h.21.ln.weight', 'text_model.transformer.h.21.mixer.Wqkv.bias', 'text_model.transformer.h.21.mixer.Wqkv.weight', 'text_model.transformer.h.21.mixer.out_proj.bias', 'text_model.transformer.h.21.mixer.out_proj.weight', 
'text_model.transformer.h.21.mlp.fc1.bias', 'text_model.transformer.h.21.mlp.fc1.weight', 'text_model.transformer.h.21.mlp.fc2.bias', 'text_model.transformer.h.21.mlp.fc2.weight', 'text_model.transformer.h.22.ln.bias', 'text_model.transformer.h.22.ln.weight', 'text_model.transformer.h.22.mixer.Wqkv.bias', 'text_model.transformer.h.22.mixer.Wqkv.weight', 'text_model.transformer.h.22.mixer.out_proj.bias', 'text_model.transformer.h.22.mixer.out_proj.weight', 'text_model.transformer.h.22.mlp.fc1.bias', 'text_model.transformer.h.22.mlp.fc1.weight', 'text_model.transformer.h.22.mlp.fc2.bias', 'text_model.transformer.h.22.mlp.fc2.weight', 'text_model.transformer.h.23.ln.bias', 'text_model.transformer.h.23.ln.weight', 'text_model.transformer.h.23.mixer.Wqkv.bias', 'text_model.transformer.h.23.mixer.Wqkv.weight', 'text_model.transformer.h.23.mixer.out_proj.bias', 'text_model.transformer.h.23.mixer.out_proj.weight', 'text_model.transformer.h.23.mlp.fc1.bias', 'text_model.transformer.h.23.mlp.fc1.weight', 'text_model.transformer.h.23.mlp.fc2.bias', 'text_model.transformer.h.23.mlp.fc2.weight', 'text_model.transformer.h.3.ln.bias', 'text_model.transformer.h.3.ln.weight', 'text_model.transformer.h.3.mixer.Wqkv.bias', 'text_model.transformer.h.3.mixer.Wqkv.weight', 'text_model.transformer.h.3.mixer.out_proj.bias', 'text_model.transformer.h.3.mixer.out_proj.weight', 'text_model.transformer.h.3.mlp.fc1.bias', 'text_model.transformer.h.3.mlp.fc1.weight', 'text_model.transformer.h.3.mlp.fc2.bias', 'text_model.transformer.h.3.mlp.fc2.weight', 'text_model.transformer.h.4.ln.bias', 'text_model.transformer.h.4.ln.weight', 'text_model.transformer.h.4.mixer.Wqkv.bias', 'text_model.transformer.h.4.mixer.Wqkv.weight', 'text_model.transformer.h.4.mixer.out_proj.bias', 'text_model.transformer.h.4.mixer.out_proj.weight', 'text_model.transformer.h.4.mlp.fc1.bias', 'text_model.transformer.h.4.mlp.fc1.weight', 'text_model.transformer.h.4.mlp.fc2.bias', 'text_model.transformer.h.4.mlp.fc2.weight', 'text_model.transformer.h.5.ln.bias', 'text_model.transformer.h.5.ln.weight', 'text_model.transformer.h.5.mixer.Wqkv.bias', 'text_model.transformer.h.5.mixer.Wqkv.weight', 'text_model.transformer.h.5.mixer.out_proj.bias', 'text_model.transformer.h.5.mixer.out_proj.weight', 'text_model.transformer.h.5.mlp.fc1.bias', 'text_model.transformer.h.5.mlp.fc1.weight', 'text_model.transformer.h.5.mlp.fc2.bias', 'text_model.transformer.h.5.mlp.fc2.weight', 'text_model.transformer.h.6.ln.bias', 'text_model.transformer.h.6.ln.weight', 'text_model.transformer.h.6.mixer.Wqkv.bias', 'text_model.transformer.h.6.mixer.Wqkv.weight', 'text_model.transformer.h.6.mixer.out_proj.bias', 'text_model.transformer.h.6.mixer.out_proj.weight', 'text_model.transformer.h.6.mlp.fc1.bias', 'text_model.transformer.h.6.mlp.fc1.weight', 'text_model.transformer.h.6.mlp.fc2.bias', 'text_model.transformer.h.6.mlp.fc2.weight', 'text_model.transformer.h.7.ln.bias', 'text_model.transformer.h.7.ln.weight', 'text_model.transformer.h.7.mixer.Wqkv.bias', 'text_model.transformer.h.7.mixer.Wqkv.weight', 'text_model.transformer.h.7.mixer.out_proj.bias', 'text_model.transformer.h.7.mixer.out_proj.weight', 'text_model.transformer.h.7.mlp.fc1.bias', 'text_model.transformer.h.7.mlp.fc1.weight', 'text_model.transformer.h.7.mlp.fc2.bias', 'text_model.transformer.h.7.mlp.fc2.weight', 'text_model.transformer.h.8.ln.bias', 'text_model.transformer.h.8.ln.weight', 'text_model.transformer.h.8.mixer.Wqkv.bias', 'text_model.transformer.h.8.mixer.Wqkv.weight', 
'text_model.transformer.h.8.mixer.out_proj.bias', 'text_model.transformer.h.8.mixer.out_proj.weight', 'text_model.transformer.h.8.mlp.fc1.bias', 'text_model.transformer.h.8.mlp.fc1.weight', 'text_model.transformer.h.8.mlp.fc2.bias', 'text_model.transformer.h.8.mlp.fc2.weight', 'text_model.transformer.h.9.ln.bias', 'text_model.transformer.h.9.ln.weight', 'text_model.transformer.h.9.mixer.Wqkv.bias', 'text_model.transformer.h.9.mixer.Wqkv.weight', 'text_model.transformer.h.9.mixer.out_proj.bias', 'text_model.transformer.h.9.mixer.out_proj.weight', 'text_model.transformer.h.9.mlp.fc1.bias', 'text_model.transformer.h.9.mlp.fc1.weight', 'text_model.transformer.h.9.mlp.fc2.bias', 'text_model.transformer.h.9.mlp.fc2.weight', 'vision_encoder.encoder.model.visual.blocks.0.attn.proj.bias', 'vision_encoder.encoder.model.visual.blocks.0.attn.proj.weight', 'vision_encoder.encoder.model.visual.blocks.0.attn.qkv.bias', 'vision_encoder.encoder.model.visual.blocks.0.attn.qkv.weight', 'vision_encoder.encoder.model.visual.blocks.0.mlp.fc1.bias', 'vision_encoder.encoder.model.visual.blocks.0.mlp.fc1.weight', 'vision_encoder.encoder.model.visual.blocks.0.mlp.fc2.bias', 'vision_encoder.encoder.model.visual.blocks.0.mlp.fc2.weight', 'vision_encoder.encoder.model.visual.blocks.0.norm1.bias', 'vision_encoder.encoder.model.visual.blocks.0.norm1.weight', 'vision_encoder.encoder.model.visual.blocks.0.norm2.bias', 'vision_encoder.encoder.model.visual.blocks.0.norm2.weight', 'vision_encoder.encoder.model.visual.blocks.1.attn.proj.bias', 'vision_encoder.encoder.model.visual.blocks.1.attn.proj.weight', 'vision_encoder.encoder.model.visual.blocks.1.attn.qkv.bias', 'vision_encoder.encoder.model.visual.blocks.1.attn.qkv.weight', 'vision_encoder.encoder.model.visual.blocks.1.mlp.fc1.bias', 'vision_encoder.encoder.model.visual.blocks.1.mlp.fc1.weight', 'vision_encoder.encoder.model.visual.blocks.1.mlp.fc2.bias', 'vision_encoder.encoder.model.visual.blocks.1.mlp.fc2.weight', 'vision_encoder.encoder.model.visual.blocks.1.norm1.bias', 'vision_encoder.encoder.model.visual.blocks.1.norm1.weight', 'vision_encoder.encoder.model.visual.blocks.1.norm2.bias', 'vision_encoder.encoder.model.visual.blocks.1.norm2.weight', 'vision_encoder.encoder.model.visual.blocks.10.attn.proj.bias', 'vision_encoder.encoder.model.visual.blocks.10.attn.proj.weight', 'vision_encoder.encoder.model.visual.blocks.10.attn.qkv.bias', 'vision_encoder.encoder.model.visual.blocks.10.attn.qkv.weight', 'vision_encoder.encoder.model.visual.blocks.10.mlp.fc1.bias', 'vision_encoder.encoder.model.visual.blocks.10.mlp.fc1.weight', 'vision_encoder.encoder.model.visual.blocks.10.mlp.fc2.bias', 'vision_encoder.encoder.model.visual.blocks.10.mlp.fc2.weight', 'vision_encoder.encoder.model.visual.blocks.10.norm1.bias', 'vision_encoder.encoder.model.visual.blocks.10.norm1.weight', 'vision_encoder.encoder.model.visual.blocks.10.norm2.bias', 'vision_encoder.encoder.model.visual.blocks.10.norm2.weight', 'vision_encoder.encoder.model.visual.blocks.11.attn.proj.bias', 'vision_encoder.encoder.model.visual.blocks.11.attn.proj.weight', 'vision_encoder.encoder.model.visual.blocks.11.attn.qkv.bias', 'vision_encoder.encoder.model.visual.blocks.11.attn.qkv.weight', 'vision_encoder.encoder.model.visual.blocks.11.mlp.fc1.bias', 'vision_encoder.encoder.model.visual.blocks.11.mlp.fc1.weight', 'vision_encoder.encoder.model.visual.blocks.11.mlp.fc2.bias', 'vision_encoder.encoder.model.visual.blocks.11.mlp.fc2.weight', 'vision_encoder.encoder.model.visual.blocks.11.norm1.bias', 
'vision_encoder.encoder.model.visual.blocks.11.norm1.weight', 'vision_encoder.encoder.model.visual.blocks.11.norm2.bias', 'vision_encoder.encoder.model.visual.blocks.11.norm2.weight', 'vision_encoder.encoder.model.visual.blocks.12.attn.proj.bias', 'vision_encoder.encoder.model.visual.blocks.12.attn.proj.weight', 'vision_encoder.encoder.model.visual.blocks.12.attn.qkv.bias', 'vision_encoder.encoder.model.visual.blocks.12.attn.qkv.weight', 'vision_encoder.encoder.model.visual.blocks.12.mlp.fc1.bias', 'vision_encoder.encoder.model.visual.blocks.12.mlp.fc1.weight', 'vision_encoder.encoder.model.visual.blocks.12.mlp.fc2.bias', 'vision_encoder.encoder.model.visual.blocks.12.mlp.fc2.weight', 'vision_encoder.encoder.model.visual.blocks.12.norm1.bias', 'vision_encoder.encoder.model.visual.blocks.12.norm1.weight', 'vision_encoder.encoder.model.visual.blocks.12.norm2.bias', 'vision_encoder.encoder.model.visual.blocks.12.norm2.weight', 'vision_encoder.encoder.model.visual.blocks.13.attn.proj.bias', 'vision_encoder.encoder.model.visual.blocks.13.attn.proj.weight', 'vision_encoder.encoder.model.visual.blocks.13.attn.qkv.bias', 'vision_encoder.encoder.model.visual.blocks.13.attn.qkv.weight', 'vision_encoder.encoder.model.visual.blocks.13.mlp.fc1.bias', 'vision_encoder.encoder.model.visual.blocks.13.mlp.fc1.weight', 'vision_encoder.encoder.model.visual.blocks.13.mlp.fc2.bias', 'vision_encoder.encoder.model.visual.blocks.13.mlp.fc2.weight', 'vision_encoder.encoder.model.visual.blocks.13.norm1.bias', 'vision_encoder.encoder.model.visual.blocks.13.norm1.weight', 'vision_encoder.encoder.model.visual.blocks.13.norm2.bias', 'vision_encoder.encoder.model.visual.blocks.13.norm2.weight', 'vision_encoder.encoder.model.visual.blocks.14.attn.proj.bias', 'vision_encoder.encoder.model.visual.blocks.14.attn.proj.weight', 'vision_encoder.encoder.model.visual.blocks.14.attn.qkv.bias', 'vision_encoder.encoder.model.visual.blocks.14.attn.qkv.weight', 'vision_encoder.encoder.model.visual.blocks.14.mlp.fc1.bias', 'vision_encoder.encoder.model.visual.blocks.14.mlp.fc1.weight', 'vision_encoder.encoder.model.visual.blocks.14.mlp.fc2.bias', 'vision_encoder.encoder.model.visual.blocks.14.mlp.fc2.weight', 'vision_encoder.encoder.model.visual.blocks.14.norm1.bias', 'vision_encoder.encoder.model.visual.blocks.14.norm1.weight', 'vision_encoder.encoder.model.visual.blocks.14.norm2.bias', 'vision_encoder.encoder.model.visual.blocks.14.norm2.weight', 'vision_encoder.encoder.model.visual.blocks.15.attn.proj.bias', 'vision_encoder.encoder.model.visual.blocks.15.attn.proj.weight', 'vision_encoder.encoder.model.visual.blocks.15.attn.qkv.bias', 'vision_encoder.encoder.model.visual.blocks.15.attn.qkv.weight', 'vision_encoder.encoder.model.visual.blocks.15.mlp.fc1.bias', 'vision_encoder.encoder.model.visual.blocks.15.mlp.fc1.weight', 'vision_encoder.encoder.model.visual.blocks.15.mlp.fc2.bias', 'vision_encoder.encoder.model.visual.blocks.15.mlp.fc2.weight', 'vision_encoder.encoder.model.visual.blocks.15.norm1.bias', 'vision_encoder.encoder.model.visual.blocks.15.norm1.weight', 'vision_encoder.encoder.model.visual.blocks.15.norm2.bias', 'vision_encoder.encoder.model.visual.blocks.15.norm2.weight', 'vision_encoder.encoder.model.visual.blocks.16.attn.proj.bias', 'vision_encoder.encoder.model.visual.blocks.16.attn.proj.weight', 'vision_encoder.encoder.model.visual.blocks.16.attn.qkv.bias', 'vision_encoder.encoder.model.visual.blocks.16.attn.qkv.weight', 'vision_encoder.encoder.model.visual.blocks.16.mlp.fc1.bias', 
'vision_encoder.encoder.model.visual.blocks.16.mlp.fc1.weight', 'vision_encoder.encoder.model.visual.blocks.16.mlp.fc2.bias', 'vision_encoder.encoder.model.visual.blocks.16.mlp.fc2.weight', 'vision_encoder.encoder.model.visual.blocks.16.norm1.bias', 'vision_encoder.encoder.model.visual.blocks.16.norm1.weight', 'vision_encoder.encoder.model.visual.blocks.16.norm2.bias', 'vision_encoder.encoder.model.visual.blocks.16.norm2.weight', 'vision_encoder.encoder.model.visual.blocks.17.attn.proj.bias', 'vision_encoder.encoder.model.visual.blocks.17.attn.proj.weight', 'vision_encoder.encoder.model.visual.blocks.17.attn.qkv.bias', 'vision_encoder.encoder.model.visual.blocks.17.attn.qkv.weight', 'vision_encoder.encoder.model.visual.blocks.17.mlp.fc1.bias', 'vision_encoder.encoder.model.visual.blocks.17.mlp.fc1.weight', 'vision_encoder.encoder.model.visual.blocks.17.mlp.fc2.bias', 'vision_encoder.encoder.model.visual.blocks.17.mlp.fc2.weight', 'vision_encoder.encoder.model.visual.blocks.17.norm1.bias', 'vision_encoder.encoder.model.visual.blocks.17.norm1.weight', 'vision_encoder.encoder.model.visual.blocks.17.norm2.bias', 'vision_encoder.encoder.model.visual.blocks.17.norm2.weight', 'vision_encoder.encoder.model.visual.blocks.18.attn.proj.bias', 'vision_encoder.encoder.model.visual.blocks.18.attn.proj.weight', 'vision_encoder.encoder.model.visual.blocks.18.attn.qkv.bias', 'vision_encoder.encoder.model.visual.blocks.18.attn.qkv.weight', 'vision_encoder.encoder.model.visual.blocks.18.mlp.fc1.bias', 'vision_encoder.encoder.model.visual.blocks.18.mlp.fc1.weight', 'vision_encoder.encoder.model.visual.blocks.18.mlp.fc2.bias', 'vision_encoder.encoder.model.visual.blocks.18.mlp.fc2.weight', 'vision_encoder.encoder.model.visual.blocks.18.norm1.bias', 'vision_encoder.encoder.model.visual.blocks.18.norm1.weight', 'vision_encoder.encoder.model.visual.blocks.18.norm2.bias', 'vision_encoder.encoder.model.visual.blocks.18.norm2.weight', 'vision_encoder.encoder.model.visual.blocks.19.attn.proj.bias', 'vision_encoder.encoder.model.visual.blocks.19.attn.proj.weight', 'vision_encoder.encoder.model.visual.blocks.19.attn.qkv.bias', 'vision_encoder.encoder.model.visual.blocks.19.attn.qkv.weight', 'vision_encoder.encoder.model.visual.blocks.19.mlp.fc1.bias', 'vision_encoder.encoder.model.visual.blocks.19.mlp.fc1.weight', 'vision_encoder.encoder.model.visual.blocks.19.mlp.fc2.bias', 'vision_encoder.encoder.model.visual.blocks.19.mlp.fc2.weight', 'vision_encoder.encoder.model.visual.blocks.19.norm1.bias', 'vision_encoder.encoder.model.visual.blocks.19.norm1.weight', 'vision_encoder.encoder.model.visual.blocks.19.norm2.bias', 'vision_encoder.encoder.model.visual.blocks.19.norm2.weight', 'vision_encoder.encoder.model.visual.blocks.2.attn.proj.bias', 'vision_encoder.encoder.model.visual.blocks.2.attn.proj.weight', 'vision_encoder.encoder.model.visual.blocks.2.attn.qkv.bias', 'vision_encoder.encoder.model.visual.blocks.2.attn.qkv.weight', 'vision_encoder.encoder.model.visual.blocks.2.mlp.fc1.bias', 'vision_encoder.encoder.model.visual.blocks.2.mlp.fc1.weight', 'vision_encoder.encoder.model.visual.blocks.2.mlp.fc2.bias', 'vision_encoder.encoder.model.visual.blocks.2.mlp.fc2.weight', 'vision_encoder.encoder.model.visual.blocks.2.norm1.bias', 'vision_encoder.encoder.model.visual.blocks.2.norm1.weight', 'vision_encoder.encoder.model.visual.blocks.2.norm2.bias', 'vision_encoder.encoder.model.visual.blocks.2.norm2.weight', 'vision_encoder.encoder.model.visual.blocks.20.attn.proj.bias', 
'vision_encoder.encoder.model.visual.blocks.20.attn.proj.weight', 'vision_encoder.encoder.model.visual.blocks.20.attn.qkv.bias', 'vision_encoder.encoder.model.visual.blocks.20.attn.qkv.weight', 'vision_encoder.encoder.model.visual.blocks.20.mlp.fc1.bias', 'vision_encoder.encoder.model.visual.blocks.20.mlp.fc1.weight', 'vision_encoder.encoder.model.visual.blocks.20.mlp.fc2.bias', 'vision_encoder.encoder.model.visual.blocks.20.mlp.fc2.weight', 'vision_encoder.encoder.model.visual.blocks.20.norm1.bias', 'vision_encoder.encoder.model.visual.blocks.20.norm1.weight', 'vision_encoder.encoder.model.visual.blocks.20.norm2.bias', 'vision_encoder.encoder.model.visual.blocks.20.norm2.weight', 'vision_encoder.encoder.model.visual.blocks.21.attn.proj.bias', 'vision_encoder.encoder.model.visual.blocks.21.attn.proj.weight', 'vision_encoder.encoder.model.visual.blocks.21.attn.qkv.bias', 'vision_encoder.encoder.model.visual.blocks.21.attn.qkv.weight', 'vision_encoder.encoder.model.visual.blocks.21.mlp.fc1.bias', 'vision_encoder.encoder.model.visual.blocks.21.mlp.fc1.weight', 'vision_encoder.encoder.model.visual.blocks.21.mlp.fc2.bias', 'vision_encoder.encoder.model.visual.blocks.21.mlp.fc2.weight', 'vision_encoder.encoder.model.visual.blocks.21.norm1.bias', 'vision_encoder.encoder.model.visual.blocks.21.norm1.weight', 'vision_encoder.encoder.model.visual.blocks.21.norm2.bias', 'vision_encoder.encoder.model.visual.blocks.21.norm2.weight', 'vision_encoder.encoder.model.visual.blocks.22.attn.proj.bias', 'vision_encoder.encoder.model.visual.blocks.22.attn.proj.weight', 'vision_encoder.encoder.model.visual.blocks.22.attn.qkv.bias', 'vision_encoder.encoder.model.visual.blocks.22.attn.qkv.weight', 'vision_encoder.encoder.model.visual.blocks.22.mlp.fc1.bias', 'vision_encoder.encoder.model.visual.blocks.22.mlp.fc1.weight', 'vision_encoder.encoder.model.visual.blocks.22.mlp.fc2.bias', 'vision_encoder.encoder.model.visual.blocks.22.mlp.fc2.weight', 'vision_encoder.encoder.model.visual.blocks.22.norm1.bias', 'vision_encoder.encoder.model.visual.blocks.22.norm1.weight', 'vision_encoder.encoder.model.visual.blocks.22.norm2.bias', 'vision_encoder.encoder.model.visual.blocks.22.norm2.weight', 'vision_encoder.encoder.model.visual.blocks.23.attn.proj.bias', 'vision_encoder.encoder.model.visual.blocks.23.attn.proj.weight', 'vision_encoder.encoder.model.visual.blocks.23.attn.qkv.bias', 'vision_encoder.encoder.model.visual.blocks.23.attn.qkv.weight', 'vision_encoder.encoder.model.visual.blocks.23.mlp.fc1.bias', 'vision_encoder.encoder.model.visual.blocks.23.mlp.fc1.weight', 'vision_encoder.encoder.model.visual.blocks.23.mlp.fc2.bias', 'vision_encoder.encoder.model.visual.blocks.23.mlp.fc2.weight', 'vision_encoder.encoder.model.visual.blocks.23.norm1.bias', 'vision_encoder.encoder.model.visual.blocks.23.norm1.weight', 'vision_encoder.encoder.model.visual.blocks.23.norm2.bias', 'vision_encoder.encoder.model.visual.blocks.23.norm2.weight', 'vision_encoder.encoder.model.visual.blocks.24.attn.proj.bias', 'vision_encoder.encoder.model.visual.blocks.24.attn.proj.weight', 'vision_encoder.encoder.model.visual.blocks.24.attn.qkv.bias', 'vision_encoder.encoder.model.visual.blocks.24.attn.qkv.weight', 'vision_encoder.encoder.model.visual.blocks.24.mlp.fc1.bias', 'vision_encoder.encoder.model.visual.blocks.24.mlp.fc1.weight', 'vision_encoder.encoder.model.visual.blocks.24.mlp.fc2.bias', 'vision_encoder.encoder.model.visual.blocks.24.mlp.fc2.weight', 'vision_encoder.encoder.model.visual.blocks.24.norm1.bias', 
'vision_encoder.encoder.model.visual.blocks.24.norm1.weight', 'vision_encoder.encoder.model.visual.blocks.24.norm2.bias', 'vision_encoder.encoder.model.visual.blocks.24.norm2.weight', 'vision_encoder.encoder.model.visual.blocks.25.attn.proj.bias', 'vision_encoder.encoder.model.visual.blocks.25.attn.proj.weight', 'vision_encoder.encoder.model.visual.blocks.25.attn.qkv.bias', 'vision_encoder.encoder.model.visual.blocks.25.attn.qkv.weight', 'vision_encoder.encoder.model.visual.blocks.25.mlp.fc1.bias', 'vision_encoder.encoder.model.visual.blocks.25.mlp.fc1.weight', 'vision_encoder.encoder.model.visual.blocks.25.mlp.fc2.bias', 'vision_encoder.encoder.model.visual.blocks.25.mlp.fc2.weight', 'vision_encoder.encoder.model.visual.blocks.25.norm1.bias', 'vision_encoder.encoder.model.visual.blocks.25.norm1.weight', 'vision_encoder.encoder.model.visual.blocks.25.norm2.bias', 'vision_encoder.encoder.model.visual.blocks.25.norm2.weight', 'vision_encoder.encoder.model.visual.blocks.26.attn.proj.bias', 'vision_encoder.encoder.model.visual.blocks.26.attn.proj.weight', 'vision_encoder.encoder.model.visual.blocks.26.attn.qkv.bias', 'vision_encoder.encoder.model.visual.blocks.26.attn.qkv.weight', 'vision_encoder.encoder.model.visual.blocks.26.mlp.fc1.bias', 'vision_encoder.encoder.model.visual.blocks.26.mlp.fc1.weight', 'vision_encoder.encoder.model.visual.blocks.26.mlp.fc2.bias', 'vision_encoder.encoder.model.visual.blocks.26.mlp.fc2.weight', 'vision_encoder.encoder.model.visual.blocks.26.norm1.bias', 'vision_encoder.encoder.model.visual.blocks.26.norm1.weight', 'vision_encoder.encoder.model.visual.blocks.26.norm2.bias', 'vision_encoder.encoder.model.visual.blocks.26.norm2.weight', 'vision_encoder.encoder.model.visual.blocks.3.attn.proj.bias', 'vision_encoder.encoder.model.visual.blocks.3.attn.proj.weight', 'vision_encoder.encoder.model.visual.blocks.3.attn.qkv.bias', 'vision_encoder.encoder.model.visual.blocks.3.attn.qkv.weight', 'vision_encoder.encoder.model.visual.blocks.3.mlp.fc1.bias', 'vision_encoder.encoder.model.visual.blocks.3.mlp.fc1.weight', 'vision_encoder.encoder.model.visual.blocks.3.mlp.fc2.bias', 'vision_encoder.encoder.model.visual.blocks.3.mlp.fc2.weight', 'vision_encoder.encoder.model.visual.blocks.3.norm1.bias', 'vision_encoder.encoder.model.visual.blocks.3.norm1.weight', 'vision_encoder.encoder.model.visual.blocks.3.norm2.bias', 'vision_encoder.encoder.model.visual.blocks.3.norm2.weight', 'vision_encoder.encoder.model.visual.blocks.4.attn.proj.bias', 'vision_encoder.encoder.model.visual.blocks.4.attn.proj.weight', 'vision_encoder.encoder.model.visual.blocks.4.attn.qkv.bias', 'vision_encoder.encoder.model.visual.blocks.4.attn.qkv.weight', 'vision_encoder.encoder.model.visual.blocks.4.mlp.fc1.bias', 'vision_encoder.encoder.model.visual.blocks.4.mlp.fc1.weight', 'vision_encoder.encoder.model.visual.blocks.4.mlp.fc2.bias', 'vision_encoder.encoder.model.visual.blocks.4.mlp.fc2.weight', 'vision_encoder.encoder.model.visual.blocks.4.norm1.bias', 'vision_encoder.encoder.model.visual.blocks.4.norm1.weight', 'vision_encoder.encoder.model.visual.blocks.4.norm2.bias', 'vision_encoder.encoder.model.visual.blocks.4.norm2.weight', 'vision_encoder.encoder.model.visual.blocks.5.attn.proj.bias', 'vision_encoder.encoder.model.visual.blocks.5.attn.proj.weight', 'vision_encoder.encoder.model.visual.blocks.5.attn.qkv.bias', 'vision_encoder.encoder.model.visual.blocks.5.attn.qkv.weight', 'vision_encoder.encoder.model.visual.blocks.5.mlp.fc1.bias', 'vision_encoder.encoder.model.visual.blocks.5.mlp.fc1.weight', 
'vision_encoder.encoder.model.visual.blocks.5.mlp.fc2.bias', 'vision_encoder.encoder.model.visual.blocks.5.mlp.fc2.weight', 'vision_encoder.encoder.model.visual.blocks.5.norm1.bias', 'vision_encoder.encoder.model.visual.blocks.5.norm1.weight', 'vision_encoder.encoder.model.visual.blocks.5.norm2.bias', 'vision_encoder.encoder.model.visual.blocks.5.norm2.weight', 'vision_encoder.encoder.model.visual.blocks.6.attn.proj.bias', 'vision_encoder.encoder.model.visual.blocks.6.attn.proj.weight', 'vision_encoder.encoder.model.visual.blocks.6.attn.qkv.bias', 'vision_encoder.encoder.model.visual.blocks.6.attn.qkv.weight', 'vision_encoder.encoder.model.visual.blocks.6.mlp.fc1.bias', 'vision_encoder.encoder.model.visual.blocks.6.mlp.fc1.weight', 'vision_encoder.encoder.model.visual.blocks.6.mlp.fc2.bias', 'vision_encoder.encoder.model.visual.blocks.6.mlp.fc2.weight', 'vision_encoder.encoder.model.visual.blocks.6.norm1.bias', 'vision_encoder.encoder.model.visual.blocks.6.norm1.weight', 'vision_encoder.encoder.model.visual.blocks.6.norm2.bias', 'vision_encoder.encoder.model.visual.blocks.6.norm2.weight', 'vision_encoder.encoder.model.visual.blocks.7.attn.proj.bias', 'vision_encoder.encoder.model.visual.blocks.7.attn.proj.weight', 'vision_encoder.encoder.model.visual.blocks.7.attn.qkv.bias', 'vision_encoder.encoder.model.visual.blocks.7.attn.qkv.weight', 'vision_encoder.encoder.model.visual.blocks.7.mlp.fc1.bias', 'vision_encoder.encoder.model.visual.blocks.7.mlp.fc1.weight', 'vision_encoder.encoder.model.visual.blocks.7.mlp.fc2.bias', 'vision_encoder.encoder.model.visual.blocks.7.mlp.fc2.weight', 'vision_encoder.encoder.model.visual.blocks.7.norm1.bias', 'vision_encoder.encoder.model.visual.blocks.7.norm1.weight', 'vision_encoder.encoder.model.visual.blocks.7.norm2.bias', 'vision_encoder.encoder.model.visual.blocks.7.norm2.weight', 'vision_encoder.encoder.model.visual.blocks.8.attn.proj.bias', 'vision_encoder.encoder.model.visual.blocks.8.attn.proj.weight', 'vision_encoder.encoder.model.visual.blocks.8.attn.qkv.bias', 'vision_encoder.encoder.model.visual.blocks.8.attn.qkv.weight', 'vision_encoder.encoder.model.visual.blocks.8.mlp.fc1.bias', 'vision_encoder.encoder.model.visual.blocks.8.mlp.fc1.weight', 'vision_encoder.encoder.model.visual.blocks.8.mlp.fc2.bias', 'vision_encoder.encoder.model.visual.blocks.8.mlp.fc2.weight', 'vision_encoder.encoder.model.visual.blocks.8.norm1.bias', 'vision_encoder.encoder.model.visual.blocks.8.norm1.weight', 'vision_encoder.encoder.model.visual.blocks.8.norm2.bias', 'vision_encoder.encoder.model.visual.blocks.8.norm2.weight', 'vision_encoder.encoder.model.visual.blocks.9.attn.proj.bias', 'vision_encoder.encoder.model.visual.blocks.9.attn.proj.weight', 'vision_encoder.encoder.model.visual.blocks.9.attn.qkv.bias', 'vision_encoder.encoder.model.visual.blocks.9.attn.qkv.weight', 'vision_encoder.encoder.model.visual.blocks.9.mlp.fc1.bias', 'vision_encoder.encoder.model.visual.blocks.9.mlp.fc1.weight', 'vision_encoder.encoder.model.visual.blocks.9.mlp.fc2.bias', 'vision_encoder.encoder.model.visual.blocks.9.mlp.fc2.weight', 'vision_encoder.encoder.model.visual.blocks.9.norm1.bias', 'vision_encoder.encoder.model.visual.blocks.9.norm1.weight', 'vision_encoder.encoder.model.visual.blocks.9.norm2.bias', 'vision_encoder.encoder.model.visual.blocks.9.norm2.weight', 'vision_encoder.encoder.model.visual.norm.bias', 'vision_encoder.encoder.model.visual.norm.weight', 'vision_encoder.encoder.model.visual.patch_embed.linear.bias', 
'vision_encoder.encoder.model.visual.patch_embed.linear.weight', 'vision_encoder.encoder.model.visual.pos_embed', 'vision_encoder.projection.mlp.fc1.bias', 'vision_encoder.projection.mlp.fc1.weight', 'vision_encoder.projection.mlp.fc2.bias', 'vision_encoder.projection.mlp.fc2.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
2025-04-07T19:52:18.772046 - The `seen_tokens` attribute is deprecated and will be removed in v4.41. Use the `cache_position` model input instead.
2025-04-07T19:52:18.775046 - !!! Exception during processing !!! 'DynamicCache' object has no attribute 'get_max_length'
2025-04-07T19:52:18.778046 - Traceback (most recent call last):
File "E:\zsy\ComfyUI\execution.py", line 323, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "E:\zsy\ComfyUI\execution.py", line 198, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "E:\zsy\ComfyUI\execution.py", line 169, in _map_node_over_list
process_inputs(input_dict, i)
File "E:\zsy\ComfyUI\execution.py", line 158, in process_inputs
results.append(getattr(obj, func)(**inputs))
File "E:\zsy\ComfyUI\custom_nodes\ComfyUI-moondream-main\nodes.py", line 81, in process
answer = self.moondream.answer_question(image_embeds, question, self.tokenizer, max_new_tokens)
File "E:\zsy\ComfyUI\custom_nodes\ComfyUI-moondream-main\moondream\moondream.py", line 93, in answer_question
answer = self.generate(
File "E:\zsy\ComfyUI\custom_nodes\ComfyUI-moondream-main\moondream\moondream.py", line 76, in generate
output_ids = self.text_model.generate(
File "E:\zsy\ComfyUI\python\lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "E:\zsy\ComfyUI\python\lib\site-packages\transformers\generation\utils.py", line 2326, in generate
result = self._sample(
File "E:\zsy\ComfyUI\python\lib\site-packages\transformers\generation\utils.py", line 3279, in _sample
model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs)
File "E:\zsy\ComfyUI\custom_nodes\ComfyUI-moondream-main\moondream\modeling_phi.py", line 1158, in prepare_inputs_for_generation
max_cache_length = past_key_values.get_max_length()
AttributeError: 'DynamicCache' object has no attribute 'get_max_length'. Did you mean: 'get_seq_length'?
2025-04-07T19:52:18.779043 - Prompt executed in 5.63 seconds
## Additional Context
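Until the node's bundled `modeling_phi.py` is updated, a workaround that seems to sidestep the error is to alias the removed method back onto `DynamicCache` before the Moondream node runs, for example from a small startup snippet. This is a hedged sketch, not an official API: it assumes the installed transformers still provides `Cache.get_max_cache_shape()`.

```python
# Hypothetical workaround: restore DynamicCache.get_max_length() as an alias of
# get_max_cache_shape() so legacy remote-code models (moondream's modeling_phi.py)
# keep working on newer transformers. Remove once the custom node is fixed upstream.
from transformers.cache_utils import DynamicCache

if not hasattr(DynamicCache, "get_max_length") and hasattr(DynamicCache, "get_max_cache_shape"):
    DynamicCache.get_max_length = DynamicCache.get_max_cache_shape
```

Alternatively, downgrading transformers to a release that still ships `Cache.get_max_length()` should avoid the AttributeError, at the cost of keeping the other deprecation warnings visible in the log.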