Google Bard Gets Better At Math Coding Sheets Export

Google has introduced two enhancements to Bard. First, Bard now performs better on mathematical tasks, coding questions, and string manipulation. Second, it offers a new export feature to Google Sheets.

At the I/O event, Google expanded Bard’s capabilities globally, including image recognition and additional coding features.

Better responses for advanced reasoning and math prompts

Google has implemented a new technique called “implicit code execution” to enhance Bard’s ability to handle computational prompts. By running code in the background, Bard can now provide more accurate responses to mathematical tasks, coding questions, and string manipulation prompts.

This improvement allows Bard to excel in answering prompts such as determining prime factors, calculating growth rates, or reversing words.
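To make the examples above concrete, here is an illustrative Python sketch (not Bard's actual code) of the kinds of deterministic computations that can be delegated to code execution rather than to token-by-token text generation: prime factorization, growth-rate calculation, and word reversal.

```python
# Illustrative helpers for the three example prompt types. These are
# assumptions about what such background code might look like, not
# anything Google has published.

def prime_factors(n: int) -> list[int]:
    """Return the prime factorization of n, smallest factors first."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors


def growth_rate(start: float, end: float, periods: int) -> float:
    """Compound per-period growth rate from start to end over `periods`."""
    return (end / start) ** (1 / periods) - 1


def reverse_words(sentence: str) -> str:
    """Reverse the order of the words in a sentence."""
    return " ".join(reversed(sentence.split()))


print(prime_factors(60))                      # → [2, 2, 3, 5]
print(round(growth_rate(100.0, 200.0, 10), 4))  # → 0.0718
print(reverse_words("Bard runs code"))        # → "code runs Bard"
```

Each of these is trivial for a few lines of code but error-prone for a purely generative model, which is exactly the gap implicit code execution targets.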

Improved logic and reasoning skills

Large language models (LLMs) have been known for their proficiency in language and creative tasks, but they often struggle with reasoning and math. To address this limitation, Google has introduced a new method that combines the power of LLMs with traditional code execution.

  • The analogy used is inspired by Daniel Kahneman’s book “Thinking, Fast and Slow,” which distinguishes between “System 1” and “System 2” thinking. System 1 represents quick and intuitive thinking, while System 2 refers to slower and deliberate thinking.
  • Traditionally, LLMs operate under System 1 thinking, generating responses quickly without deep analysis. However, this can lead to limitations in problem-solving scenarios. To overcome this, Google integrated the capabilities of both LLMs and traditional code (System 2) to improve Bard’s accuracy.

Through implicit code execution, Bard recognizes prompts that require logical code, executes it in the background, and utilizes the result to generate more precise responses. According to Google, this method has shown an approximate 30% improvement in Bard’s accuracy when responding to computation-based word and math problems in internal challenge datasets.
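The flow described above can be sketched as a small pipeline: detect that a prompt calls for logical code, run generated code in the background, and fold the result into the response. Everything here is hypothetical — the trigger heuristic, the function names, and the hard-coded code generation all stand in for pieces of Google's system that are not public.

```python
# Hypothetical sketch of an "implicit code execution" loop. The heuristics
# and names are illustrative assumptions, not Google's implementation.
import re

COMPUTATIONAL_HINTS = ("prime factors", "growth rate", "reverse", "calculate")


def needs_code(prompt: str) -> bool:
    """Crude System-2 trigger: does the prompt look computational?"""
    return any(hint in prompt.lower() for hint in COMPUTATIONAL_HINTS)


def generate_code(prompt: str) -> str:
    """Stand-in for the LLM writing task-specific code for the prompt."""
    # A real system would ask the model to write this; we hard-code one case.
    if "reverse" in prompt.lower():
        word = re.findall(r"'(\w+)'", prompt)[0]
        return f"result = {word!r}[::-1]"
    raise NotImplementedError("only the 'reverse' case is sketched here")


def answer(prompt: str) -> str:
    if needs_code(prompt):                        # System 2: deliberate
        namespace: dict = {}
        exec(generate_code(prompt), namespace)    # run code in the background
        return f"The answer is {namespace['result']}."
    return "…free-form LLM response…"             # System 1: fast generation


print(answer("Please reverse the word 'lollipop'"))
# → The answer is popillol.
```

The design point is the routing step: the model keeps answering creative prompts directly, and only computational prompts pay the extra cost of generating and executing code.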

Announcing the new updates, Jack Krawczyk, Product Lead for Bard, and Amarnag Subramanya, Vice President of Engineering for Bard, said:

Bard is getting better at mathematical tasks, coding questions and string manipulation through a new technique called implicit code execution. It can also export results to Google Sheets. Of course, even with these improvements, Bard won't always get it right: it might not generate code at all, the code it generates might be wrong, or it may not include the executed code in its response. Still, this improved ability to respond with structured, logic-driven capabilities is an important step toward making Bard more helpful.