Data Phoenix Digest - ISSUE 7.2024

Explore recordings of all past Data Phoenix webinars and the AI highlights of the past week.

by Dmitry Spodarets

Welcome to this week's edition of Data Phoenix Digest!

Be active in our community: join our Slack to discuss the latest news, community events, research papers, articles, jobs, and more.

Data Phoenix video library

Explore recordings of all our past webinars to deepen your AI knowledge and enhance your learning journey.

AI Highlights of the Past Week

This week's highlights are Microsoft's and Apple's tiny language models, meant to run locally on devices with constrained hardware and network connectivity. Microsoft announced the Phi-3 family, three models ranging from 3.8B to 14B parameters. The first official launch is Phi-3-mini, a 3.8-billion-parameter model small enough to run on constrained hardware without a reliable network connection, yet performant enough to surpass Llama 3 8B on the MMLU benchmark. Its strong performance results from a carefully curated dataset and training processes reminiscent of those reported by the team behind the Llama 3 model family.
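
As a rough illustration of what "running locally" can look like in practice, here is a minimal sketch that loads a small instruction-tuned model with Hugging Face Transformers. The model identifier "microsoft/Phi-3-mini-4k-instruct" and the generation settings are assumptions for illustration; check the model card for the exact ID, license, and hardware requirements.

```python
# Minimal sketch (not from the announcement): running a small language model
# locally with Hugging Face Transformers. Model ID and settings are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_id = "microsoft/Phi-3-mini-4k-instruct"  # assumed identifier; verify on the model card

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",       # place weights on whatever hardware is available
    torch_dtype="auto",      # use the checkpoint's native precision
    trust_remote_code=True,  # the checkpoint may ship custom modeling code
)

generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
print(generator("Explain why small language models matter:",
                max_new_tokens=64)[0]["generated_text"])
```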

Apple's models are minimal by comparison. The OpenELM family comprises four pre-trained models ranging from 270 million to 3 billion parameters, plus their instruction-tuned variants. In addition to model weights and inference code, the release includes a complete training and evaluation framework that works with publicly available datasets. A key differentiator of OpenELM is its non-uniform parameter allocation within each layer of the transformer model (see the sketch below). The research paper reports that the 1.1-billion-parameter OpenELM variant achieves 2.36% higher accuracy than the 1.2-billion-parameter OLMo on OpenLLM Leaderboard tasks, while being pre-trained on half as much data. Although the OpenELM family may be little more than a proof of concept, it still points to a future of generative AI in which models can run anywhere.
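
To make the idea of non-uniform parameter allocation concrete, the toy sketch below gives each layer its own feed-forward width that grows with depth instead of using one uniform width throughout. The linear scaling rule and the numbers are illustrative assumptions, not Apple's actual OpenELM configuration.

```python
# Toy illustration of layer-wise (non-uniform) parameter allocation.
# The linear scaling rule and ratios below are assumptions for illustration only.
def layerwise_ffn_dims(num_layers: int, d_model: int,
                       min_ratio: float = 0.5, max_ratio: float = 4.0) -> list[int]:
    """Assign each transformer layer its own feed-forward width, growing with
    depth, rather than a single uniform width for every layer."""
    dims = []
    for i in range(num_layers):
        ratio = min_ratio + (max_ratio - min_ratio) * i / max(num_layers - 1, 1)
        dims.append(int(d_model * ratio))
    return dims

print(layerwise_ffn_dims(num_layers=8, d_model=1024))
# Shallow layers get narrower feed-forward blocks; deeper layers get wider ones.
```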

Two more models were recently launched, although with vastly different purposes: Snowflake launched Arctic, its dense mixture-of-experts LLM geared towards enterprise use cases, and Adobe released its next-generation Firefly Image 3 to advance its image-generation services. Firefly Image 3 is available in beta, powering image generation and generative editing features in Photoshop and the Firefly web application. Adobe reports that the model was designed to generate content appropriate for commercial use that does not violate copyright or other intellectual property rights. This points to a second common thread in the week's headlines: the variety of solutions aimed specifically at enterprises being released to the market. OpenAI introduced enterprise-grade features, updates, and cost-management tools; Perplexity AI launched its Enterprise Pro service; and Cohere released a new toolkit to help developers build AI applications.

Relatedly, more industry leaders in very diverse markets are entering strategic partnerships with Microsoft/OpenAI, or simply going all in on work they started some time ago.

While NVIDIA expands its domains, the UK's Competition and Markets Authority (CMA) wants opinions on whether the partnerships or arrangements between Amazon and Anthropic, Microsoft and Mistral AI, and Microsoft and Inflection AI fall under the jurisdiction of UK merger laws, and on the impact they may have on the Foundation Model and AI markets.

AI-powered testing seems to be having a bit of a moment:

  • Octomind secured $4.8 million to accelerate bug-free AI-assisted software testing: Octomind announced it has closed a $4.8 million seed funding round led by Cherry Ventures and backed by a strong group of angel investors. The funds will enable Octomind to enhance its AI's ability to automate the software testing process.
  • Mabl added mobile application testing to its solution portfolio: Mabl, a leading unified test automation platform, announced it is adding mobile application testing capabilities to its platform. Mabl's AI-powered mobile app testing promises up to a 90% improvement in test coverage and reliability while reducing test creation time to as little as 2 minutes.

Other headlines of interest include using AI to develop a coffee bean blend, Sanctuary AI's next-generation Phoenix robot, and more.


Data Phoenix Digest

Subscribe to the weekly digest with a summary of the top research papers, articles, news, and our community events to keep track of trends and grow in the Data & AI world!
