Google's "What People Suggest" AI: A Risky Experiment, Thankfully Ended
Google has discontinued its experimental "What People Suggest" AI feature, which summarized user-generated medical advice. While initially appealing for quick insights, its reliance on non-experts posed significant safety risks, making its removal a welcome development for user health and information accuracy.

Quick Verdict: A Necessary Retreat for User Safety
Google has quietly, and perhaps wisely, pulled the plug on its experimental AI feature, "What People Suggest." This tool aimed to summarize medical advice and health stories sourced directly from online users and forums, rather than accredited medical professionals. While the premise of quick, peer-driven insights might have held initial appeal, the inherent risks associated with dispensing unverified health information proved too substantial. Its discontinuation marks a critical moment for Google, signaling a much-needed prioritization of user safety over the allure of AI-driven convenience in the sensitive realm of health.
What Was "What People Suggest"? Deconstructing the Concept
"What People Suggest" emerged as an experimental AI feature designed to streamline the process of finding health-related discussions online. Imagine typing a medical query into Google and, instead of sifting through countless forum threads, being presented with an AI-generated summary of what everyday people were saying about similar health issues. The concept was straightforward: leverage AI to distill the collective experiences and tips shared by non-experts across the internet.
Key Details of the Feature:
- Source of Information: Primarily drew from user discussions on online forums and communities, not from medical professionals or verified health organizations.
- Functionality: Utilized AI to summarize and present health tips and personal stories.
- Intended Benefit: To offer quick, easy-to-read insights from individuals experiencing similar health challenges, potentially saving users the time of manual forum navigation.
On the surface, the idea held a certain practical allure. For those grappling with specific conditions, hearing directly from others who have navigated similar paths can feel validating and informative. However, this accessibility came with a fundamental flaw: the nature of its source material. Relying on anecdotes and unvetted advice for medical guidance is inherently precarious, a fact that became increasingly evident as the feature faced scrutiny.
The User Experience: Convenience at What Cost?
From a user experience perspective, "What People Suggest" presented a double-edged sword.
The Allure of Quick Answers
The initial appeal was undeniable. In an age where information is expected to be instant, the promise of an AI delivering digestible summaries of real-world health experiences was compelling. Users could theoretically bypass the often-overwhelming task of searching through vast online communities, getting a snapshot of peer advice almost instantaneously. This offered a sense of community and shared understanding, which can be psychologically beneficial for those dealing with health concerns. The idea of quickly understanding how others managed symptoms or what home remedies they tried felt intuitive and helpful.
The Perils of Unvetted Information
The reality, however, presented significant hazards. The primary drawback stemmed from the lack of professional oversight. The summaries, despite being algorithmically generated, were based on discussions among non-medical professionals. This meant that the advice could range from benign but ineffective, to actively misleading or even dangerous. Health is a complex field where even well-intentioned advice from a layperson can have severe unintended consequences. The risk wasn't just about incorrect facts; it extended to the misinterpretation or misapplication of advice due to a lack of crucial context.
The Missing Context
One of the most critical aspects of accurate health advice is its personalized nature. Factors such as age, existing medical history, allergies, current medications, and individual physiology significantly influence whether any health recommendation is appropriate or safe. An AI, even a sophisticated one, struggles to grasp these nuances when summarizing generalized forum discussions. Critics rightly pointed out that even when the information wasn't outright factually incorrect, the absence of this vital context could still make it risky. Furthermore, disclaimers intended to direct users to consult professionals were not always prominently displayed, inadvertently lending the AI-generated responses an air of authority they did not deserve.
Google's Stance and the Lingering Doubts
Google's official explanation for the removal of "What People Suggest" is framed as part of a broader initiative to "simplify the search experience." However, this justification feels notably convenient and, to many observers, unconvincing, especially given the timing and Google's recent history with AI-generated health content.
This feature's disappearance comes amidst increasing public and expert scrutiny over how AI handles medical information. Earlier investigations had already highlighted instances where Google's AI Overviews provided misleading or even risky health advice. There were documented cases where the AI lacked essential context or directly contradicted established medical recommendations. Adding a feature that actively summarized advice from non-experts only intensified these pre-existing concerns.
Notably, the removal of "What People Suggest" occurred just a few months after Google had to discontinue AI Overviews for liver test queries, following findings that they delivered incorrect medical advice. This pattern suggests that the company is under considerable pressure to address the accuracy and safety of its AI health-related features, making the "simplification" narrative appear more like a damage control measure than a simple user experience refinement.
Pros and Cons: Weighing the Theoretical vs. The Real-World Impact
Evaluating "What People Suggest" requires a candid assessment of its potential upsides against its very real dangers.
The Theoretical 'Pros'
- Convenience and Speed: The primary theoretical benefit was the promise of quick, easily digestible summaries of peer experiences, potentially saving users considerable time and effort in navigating complex forums.
- Sense of Community: For individuals feeling isolated by health issues, seeing summarized discussions from others facing similar challenges could foster a sense of connection and shared experience.
The Overwhelming 'Cons'
- Unreliable Sources: The fundamental flaw was sourcing medical advice from non-experts, leading to summaries that lacked professional vetting and credibility.
- Accuracy and Safety Risks: The potential for receiving misleading, incomplete, or outright dangerous health advice was significant, endangering users' well-being.
- Lack of Personalization/Context: AI summaries could not account for individual health profiles, making generalized advice potentially harmful or inappropriate.
- Misleading Authority: Insufficiently prominent disclaimers could lead users to perceive AI-generated summaries as authoritative medical guidance, despite their unverified nature.
- Erosion of Trust: The feature, by its very design, risked undermining public trust in Google's ability to provide reliable and safe health information through its AI platforms.
- Ethical Concerns: The feature raised significant ethical questions about a technology company's responsibility when dealing with sensitive health data and advice.
The Broader Implications for AI in Health
The demise of "What People Suggest" serves as a crucial case study in the evolving landscape of AI in healthcare. It underscores the profound responsibility that technology companies bear when deploying AI systems in domains as critical as human health. While AI holds immense promise for assisting medical professionals and enhancing research, its application in direct patient-facing advice, particularly when aggregating unverified information, remains fraught with challenges.
This incident highlights the need for rigorous ethical guidelines, robust validation processes, and an unwavering commitment to safety before any AI-driven health feature is widely deployed. It also reinforces the distinction between informational assistance and definitive medical advice, a boundary that AI tools must respect without exception.
Our Recommendation: Prioritizing Verified Health Information
The removal of "What People Suggest" is undeniably for the best. When it comes to your health, accuracy and reliability are paramount. While the feature offered theoretical convenience, the potential for harm far outweighed any perceived benefit.
For consumers seeking health information, our recommendation is clear:
- Consult Medical Professionals: Always seek advice, diagnosis, and treatment from qualified healthcare providers. This is the gold standard for reliable health information.
- Exercise Caution with Online Advice: If you choose to seek personal stories or experiences in online forums or communities, do so with extreme caution and a critical mindset. Understand that these are anecdotes, not professional medical recommendations.
- Prioritize Verified Sources: When using search engines or online resources for health information, prioritize reputable medical institutions, government health organizations, and peer-reviewed journals. Look for sources that clearly cite their data and have medical professionals overseeing their content.
- Take Your Time: Rushing to adopt quick, unverified advice, especially concerning health, can be detrimental. Patience and thoroughness in seeking reliable information are far more valuable.
In essence, Google's decision is a stark reminder that while technology can enhance our lives, it must do so responsibly, especially when health is on the line. The onus remains on individuals to be discerning consumers of information, prioritizing verified expertise over the allure of instant, crowd-sourced answers.
FAQ
Q: What was Google's "What People Suggest" feature?
A: Google's "What People Suggest" was an experimental AI-powered search tool that summarized health tips and personal stories gathered from everyday internet users across various online forums and communities, rather than from medical professionals.
Q: Why did Google discontinue "What People Suggest"?
A: While Google publicly stated the removal was part of an effort to simplify the search experience, the timing and context strongly suggest it was due to mounting pressure and concerns over the accuracy and safety of AI-generated medical information, particularly its potential to provide misleading or risky health advice from unverified sources.
Q: Where can I find reliable health information now that this feature is gone?
A: For accurate and safe health information, you should always consult a medical professional. If seeking information online, prioritize reputable sources such as medical institutions, government health organizations, and established health websites that are reviewed by experts. You can still find personal stories in online forums, but treat them as anecdotal experiences, not medical advice, and always cross-reference with professional guidance.