
HuggingGPT: A New Way to Solve Complex AI Tasks with Language


As artificial intelligence (AI) continues to advance, there are still many complex tasks that existing models struggle to handle. These tasks often require integrating different domains and modalities, such as language, vision, and speech, which are typically handled by separate models. However, a potential solution lies in using large language models (LLMs) as controllers to orchestrate various AI models and solve these complex tasks. In this article, we introduce HuggingGPT, a framework that leverages LLMs, specifically ChatGPT, to connect different AI models in the machine learning community, particularly Hugging Face, to tackle challenging AI tasks.

The Concept of HuggingGPT

HuggingGPT is inspired by the paper “HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in HuggingFace” by Yongliang Shen et al., which was submitted to arXiv on March 30, 2023. The framework operates by utilizing ChatGPT to conduct task planning, decompose user requests into subtasks, select appropriate models from Hugging Face based on their function descriptions, execute each subtask with the selected models, and finally summarize the responses for the user.

This pipeline of task planning, model selection, task execution, and response generation allows HuggingGPT to handle a wide range of AI tasks, from generating poems and translating languages to classifying images and converting speech to text.
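Concretely, the task plan the LLM produces can be pictured as a structured list of subtasks with arguments and dependencies. The sketch below is illustrative: the field names and the `<resource-N>` placeholders approximate, but are not, the paper's exact schema.

```python
# Illustrative task plan for a request like:
# "Read the text in this image aloud in French."
# Each entry names a task type, an id, its arguments, and the
# ids of the tasks whose outputs it consumes.
task_plan = [
    {"id": 0, "task": "image-to-text",  "args": {"image": "photo.jpg"},    "deps": []},
    {"id": 1, "task": "translation",    "args": {"text": "<resource-0>"},  "deps": [0]},
    {"id": 2, "task": "text-to-speech", "args": {"text": "<resource-1>"},  "deps": [1]},
]

# Tasks with no dependencies are ready to run first.
ready = [t["id"] for t in task_plan if not t["deps"]]
print(ready)  # → [0]
```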

HuggingGPT not only provides a powerful solution for solving AI tasks but also serves as a means to explore the potential of LLMs, and of language itself, as a universal interface for AI. Its integration with Hugging Face, which hosts hundreds of thousands of AI models, enables growable and scalable AI capabilities. This approach showcases the benefits of inter-model cooperation protocols, where the LLM acts as the brain for planning and decision-making while smaller expert models serve as executors for specific tasks, pointing toward the design of more general AI systems.

Solving AI Tasks with HuggingGPT

HuggingGPT offers a wide range of possibilities for solving complex AI tasks using natural language commands. With HuggingGPT, users can generate a poem about love, translate sentences from English to French, create word clouds from text, classify images into different categories, convert speech to text, and much more. By leveraging the language capabilities of ChatGPT and the variety of AI models in Hugging Face, HuggingGPT represents a novel approach to advanced artificial intelligence.

The Four Stages of HuggingGPT

The HuggingGPT framework can be divided into four stages: task planning, model selection, task execution, and response generation. During task planning, ChatGPT analyzes user requests, understands their intention, and breaks them down into solvable subtasks using prompts. In the model selection stage, ChatGPT chooses expert models from Hugging Face based on their descriptions to solve the planned tasks. Task execution involves invoking and executing each selected model, with the results returned to ChatGPT. Finally, ChatGPT integrates the predictions from all models and generates answers for the user during the response generation stage.
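Taken together, the four stages can be sketched as a small pipeline. The following is an illustrative sketch, not the actual implementation: the planner, registry, and executor are toy stand-ins for LLM prompting and Hugging Face model calls, and the model names are only examples.

```python
def plan_tasks(request: str) -> list[dict]:
    """Stage 1: decompose the user request into subtasks (stubbed;
    a real system would prompt an LLM here)."""
    if "translate" in request:
        return [{"task": "translation", "args": {"text": request}}]
    return [{"task": "text-generation", "args": {"text": request}}]

def select_model(task: dict, registry: dict[str, str]) -> str:
    """Stage 2: pick a model matching the task type (stubbed; a real
    system would let the LLM compare model descriptions)."""
    return registry[task["task"]]

def execute(task: dict, model: str) -> str:
    """Stage 3: run the chosen model on the subtask (stubbed)."""
    return f"{model} output for {task['task']}"

def generate_response(results: list[str]) -> str:
    """Stage 4: integrate all results into one answer for the user."""
    return "; ".join(results)

# Example run with a toy model registry.
registry = {"translation": "t5-base", "text-generation": "gpt2"}
tasks = plan_tasks("translate this sentence to French")
results = [execute(t, select_model(t, registry)) for t in tasks]
print(generate_response(results))
```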

Figure: Overview of HuggingGPT. With an LLM (e.g., ChatGPT) as the core controller and the expert models as the executors, the workflow of HuggingGPT consists of four stages: 1) Task planning: LLM parses user requests into a task list and determines the execution order and resource dependencies among tasks; 2) Model selection: LLM assigns appropriate models to tasks based on the description of expert models on Hugging Face; 3) Task execution: Expert models on hybrid endpoints execute the assigned tasks based on task order and dependencies; 4) Response generation: LLM integrates the inference results of experts and generates a summary of workflow logs to respond to the user.
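The execution order mentioned in stage 3 can be derived from the declared resource dependencies with a simple topological pass. A minimal sketch, assuming each task carries an "id" and a list of prerequisite ids under "deps" (the field names are illustrative):

```python
def execution_order(tasks):
    """Return task ids in an order that respects 'deps' (Kahn-style)."""
    done, order = set(), []
    pending = list(tasks)
    while pending:
        progressed = False
        for t in list(pending):
            # A task is runnable once all of its prerequisites finished.
            if all(d in done for d in t["deps"]):
                order.append(t["id"])
                done.add(t["id"])
                pending.remove(t)
                progressed = True
        if not progressed:
            raise ValueError("cyclic dependencies")
    return order

tasks = [
    {"id": 2, "deps": [1]},
    {"id": 0, "deps": []},
    {"id": 1, "deps": [0]},
]
print(execution_order(tasks))  # → [0, 1, 2]
```

Independent tasks (those whose prerequisites are all satisfied in the same pass) could also be dispatched in parallel rather than sequentially.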

Capabilities and Contributions of HuggingGPT

HuggingGPT’s design enables it to use external models and integrate multimodal perceptual capabilities, allowing it to handle a wide range of complex AI tasks. With the ability to connect LLMs like ChatGPT with the models in Hugging Face, HuggingGPT showcases growable and scalable AI capabilities. Experimental results demonstrate the framework’s proficiency in processing multimodal information and solving complicated AI tasks across language, vision, speech, and cross-modality.

Integration of HuggingGPT in Microsoft’s JARVIS

Microsoft’s JARVIS is a system, released on GitHub, that connects LLMs with the machine learning community and uses HuggingGPT to create natural and engaging interactions with users across different domains and scenarios. JARVIS combines HuggingGPT’s natural language response generation with other NLP components and technologies from Microsoft, including speech recognition, speech synthesis, dialogue management, knowledge graphs, and sentiment analysis. It is designed to be scalable, robust, and adaptable, and could eventually be integrated into Microsoft products and services such as Bing, Cortana, Teams, and Dynamics 365.

In practice, JARVIS uses HuggingGPT to generate diverse and contextually relevant responses, aiming for a seamless conversational experience. By combining these responses with components such as speech recognition, speech synthesis, and dialogue management, the project demonstrates the scalability, robustness, and adaptability of HuggingGPT in a real-world application.

Easy-to-Use and Interoperable Framework

The HuggingGPT framework’s user-friendly design and interoperability with popular NLP libraries and tools make it accessible and adaptable for a variety of applications. Whether the goal is fine-tuning models, developing custom pipelines, or leveraging pre-built functionality, HuggingGPT offers a convenient and efficient way to harness the power of large language models for NLP tasks.

Despite its name, HuggingGPT was not developed by Hugging Face itself: it was created by researchers from Zhejiang University and Microsoft Research Asia, and it builds on the GPT family of models. Because the expert models it orchestrates are hosted on the Hugging Face Hub, the framework interoperates naturally with the surrounding ecosystem of NLP libraries and frameworks, such as PyTorch and TensorFlow, which makes it a flexible tool for NLP researchers and practitioners.


HuggingGPT, a framework that utilizes large language models (LLMs) like ChatGPT and connects them with various AI models in machine learning communities such as Hugging Face, offers a new approach to solving complex AI tasks. By leveraging the power of language as a universal interface and the vast array of AI models available, HuggingGPT enables the integration of different modalities and domains, making it possible to address sophisticated tasks with simple natural language commands.


Sources

How to Use Jarvis, Microsoft's One AI Bot to Rule Them All
HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in HuggingFace (Shen et al., arXiv:2303.17580, 2023)
GitHub - microsoft/JARVIS: JARVIS, a system to connect LLMs with ML community