OpenAI launches GPT-4 with support for image input
GPT-4 is OpenAI's most advanced system to date: it can work with both text and images, although its output is still text only.
OpenAI's ChatGPT is an artificial intelligence chatbot that holds conversations and handles requests in natural language. Launched on November 30, 2022, it quickly won users over with its broad capabilities, which include writing code, translating, composing texts, and answering questions.
Now OpenAI is introducing users to the latest version of its large language model, GPT-4, which surpasses ChatGPT in advanced reasoning. Thanks to its broader general knowledge and problem-solving abilities, it not only solves complex problems with greater accuracy but also demonstrates "human-level performance" on many professional tasks.
GPT-4 is much larger than previous versions, since it has been trained on more data and draws on more information, which also makes the neural network more expensive to run. Notably, GPT-4 can work not only with text but also with images, although its output is still text only. The developers estimate that GPT-4 responds to requests for disallowed content 82% less often and produces factual responses 40% more often than GPT-3.5.
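For developers, this multimodal input is exposed through the same chat-style interface as plain text. The sketch below shows how an image-plus-text request might look with the OpenAI Python SDK; the model name and image URL are illustrative placeholders, and availability of image input may vary by account.

```python
# Sketch: sending an image together with a text prompt to a vision-capable
# GPT-4 model via the OpenAI Python SDK (openai>=1.0). The model name and
# image URL are assumptions for illustration, not guaranteed values.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o",  # assumed vision-capable model; adjust to what your account offers
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is shown in this picture?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
)

# The reply is text only, matching GPT-4's text-out limitation.
print(response.choices[0].message.content)
```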
GPT-4 is also more creative: it can generate, edit, and iterate with users on creative and technical writing tasks, such as composing songs, writing screenplays, or learning a user's writing style.