OpenAI Sora is a text-to-video model that can create realistic and imaginative scenes from text instructions. It can generate videos up to a minute long, at resolutions up to 1920×1080 pixels, and it can also extend existing videos forwards or backwards in time. According to OpenAI's technical report, Sora is a diffusion model built on a transformer architecture that operates on "spacetime patches" of compressed video representations, rather than a large language model. Sora is an example of generative AI, a type of artificial intelligence that can create new data, such as text, images, code, or other types of content, using generative models.
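The "spacetime patch" idea from OpenAI's report can be illustrated with a toy sketch: a video tensor is cut into small 3-D blocks spanning time, height, and width, and each block becomes one token for the transformer. The sizes and reshaping below are purely illustrative assumptions, not Sora's actual pipeline (which patchifies a learned latent representation, not raw pixels).

```python
import numpy as np

# Toy dimensions (assumed for illustration): frames, height, width, channels.
T, H, W, C = 8, 64, 64, 3
# Assumed patch extents along time, height, and width.
pt, ph, pw = 2, 16, 16

video = np.random.rand(T, H, W, C)

# Split the video into non-overlapping spacetime blocks, then flatten each
# block into one row: every row is a single "spacetime patch" token.
patches = (
    video.reshape(T // pt, pt, H // ph, ph, W // pw, pw, C)
         .transpose(0, 2, 4, 1, 3, 5, 6)
         .reshape(-1, pt * ph * pw * C)
)
print(patches.shape)  # (64, 1536): 4*4*4 patches, each of 2*16*16*3 values
```

Flattening video this way gives the model a fixed token format regardless of the clip's duration or aspect ratio, which is part of why patch-based transformers scale well to variable-size visual data.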
Sora was introduced by OpenAI in February 2024 as a research project to explore the possibilities and challenges of text-to-video generation. Sora is not yet publicly available, as OpenAI is still evaluating its safety and output quality. OpenAI has shared sample videos generated by Sora on its official website, along with a technical report explaining how Sora works and what it can do.
Sora can generate videos that combine different concepts, attributes, and styles, such as “a stylish woman walks down a Tokyo street filled with warm glowing neon and animated city signage” or “a gorgeously rendered papercraft world of a coral reef, rife with colorful fish and sea creatures”. Sora can also generate videos that match the mood, tone, and style of the user’s prompt, such as “animated scene features a close-up of a short fluffy monster kneeling beside a melting red candle” or “a movie trailer featuring the adventures of the 30 year old space man wearing a red wool knitted motorcycle helmet”.
Prompt: A close up view of a glass sphere that has a zen garden within it. There is a small dwarf in the sphere who is raking the zen garden and creating patterns in the sand.