On Tuesday, Ant Group officially launched "Lingguang," a multi-modal AI assistant that can generate mini-applications from natural language on mobile devices in as little as 30 seconds. These mini-applications are editable, interactive, and shareable.
Ant Group describes Lingguang as the industry's first AI assistant to generate multi-modal content entirely through code. Its initial launch includes three main functions: "Lingguang Dialogue," "Lingguang Flash App," and "Lingguang Eyes." Together they support multi-modal output including 3D, audio/video, charts, animations, and maps, making dialogues more vivid and communication more efficient. Lingguang is currently available on both the Android and Apple app stores.
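Ant Group has not published the underlying interface, but the "content through code" idea can be illustrated with a small, hypothetical sketch: instead of returning plain prose, the model returns a renderable snippet that the client app executes, for example in a sandboxed WebView. All names below (ModelReply, toRenderable) are invented for illustration and are not part of any Lingguang API.

```typescript
// Hypothetical sketch of "answers as code": the model reply carries
// renderable markup/script rather than plain text. Names are illustrative.
interface ModelReply {
  kind: "text" | "code"; // "code" replies carry renderable HTML/JS
  body: string;
}

// Wrap a reply in a minimal HTML shell the client could load into a WebView.
function toRenderable(reply: ModelReply): string {
  if (reply.kind === "text") {
    return `<p>${reply.body}</p>`;
  }
  return `<!DOCTYPE html><html><body>${reply.body}</body></html>`;
}

// Example: a "code" reply that draws a simple bar chart with plain canvas JS,
// standing in for the richer 3D/chart/map output described in the article.
const reply: ModelReply = {
  kind: "code",
  body: `
    <canvas id="c" width="200" height="100"></canvas>
    <script>
      const ctx = document.getElementById("c").getContext("2d");
      [40, 70, 55].forEach((h, i) => ctx.fillRect(20 + i * 60, 100 - h, 40, h));
    </script>`,
};

console.log(toRenderable(reply)); // HTML the client app would render
```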
"Lingguang Dialogue" breaks away from traditional text-based question-and-answer models. Instead of simply piling up text, it designs each dialogue like a curated exhibition: through structured thinking, the AI's answers are logically clear and concise; by generating visual content, such as dynamic 3D models, interactive maps, and audio/video, the content is presented more vividly; and ultimately, through high-quality information organization, users can instantly understand the knowledge.
The "Lingguang Flash App" function allows users to speak or input a sentence in a dialogue, and Lingguang can generate an AI application within one minute, or as fast as 30 seconds. Whether it's a fitness planner or a travel planner, both can generate content in a single sentence, customize parameters, and be instantly shared.
The "Insightful Eyes" feature utilizes AGI camera technology, enabling observation and understanding of the physical world through real-time video stream analysis, and supports various creation modes such as text-to-image/video and image-to-video.