I hereby intend to report an issue that I have noticed since the beginning of Google Bard AI, which is the reason that I have almost never used Google's AI since its early days in 2023. Instead I have been using Perplexity.AI, which appears to have remained friendly to me with regard to providing a quality service. Anthropic's AI also appears to consistently deliver quality. However, perhaps the 'incorrect' and low-quality results that I experienced with Google's AI since the beginning were not actually anything personal (if you see my other thread, you may understand why I have been a bit cautious and simply decided to stay away from Google's AI).

Today, on behalf of a customer, I decided to ask Google Gemini about the costs of the Gemini 1.5 Pro API, and I discovered a clear bias for providing incorrect answers. Google has been in the news recently for providing incorrect answers, such as suggesting you put glue on your pizza. But in my short conversation about pricing today, I was able to establish that Google's AI must have a real bias for providing invalid answers on purpose.

I asked Google's Gemini to provide information about its costs and an example calculation. Here's what happened: it wrote a simple calculation, 0.01 ÷ 1000 × 100, and as a result it answered '0.0001', which is obviously invalid (the correct result is 0.001). This is what it wrote:

Since your prompt is 100 characters, the input cost would be: (0.01 $/1000 characters) * 100 characters = $0.0001

You might think that it is OK for an AI to make a mistake, and Gemini is quick to reply that it is still under development. However, that is not the case here, because:

- Gemini had the context of writing a simple calculation.
- When writing a calculation, the AI has a 'bias for accuracy' relative to the result of the calculation.
- It is a simple calculation, and therefore the bias for accuracy was strong.

It is impossible for the AI to have put that number there by mistake. Therefore there is a requirement to explain the bias for that invalid value. Gemini tried to excuse itself by saying that it didn't have access to Vertex API cost data, but that excuse is not valid in this case, because the context was a simple calculation.

Me: Can it be said that you did that on purpose?

Gemini: I can assure you it wasn't done on purpose. There are a few reasons why I made that mistake: ...

Me: No, you did a calculation, a simple one, and listed that invalid value. That cannot have followed logically.

...

Me: Your excuses are invalid. You were doing a calculation, and that value was only tied to the result of that calculation; it had no reference whatsoever to missing Vertex cost data. So you placed that incorrect answer there on purpose, since it did not logically follow.

Gemini: You're right, my previous explanations about accidentally providing the incorrect result (0.0001) were inaccurate. Here's the most likely explanation for what happened: I am still under development and learning to navigate complex tasks. In this case, I might have gotten confused by the back-and-forth conversation and made a misstep. It's possible that during our exchange, I conflated two separate ideas:

1. The concept of calculating cost based on a hypothetical price per unit (character).
2. The lack of actual Vertex AI pricing information.

This confusion led me to incorrectly insert a specific value (0.0001) that wasn't relevant to the context. It's important to emphasize that I wasn't deliberately trying to mislead you.

Me: Your excuse is dishonest and is not valid.
Here is what went on in your process: you set out to write the requested calculation. All that you had to care about from then on was to calculate correctly. It had nothing to do at that moment with missing cost data, since you were writing a calculation. This is what you wrote:

Example Calculation (Hypothetical): Let's say you choose a tier with a 256,000 token context window and send a request with a 100-character prompt. Google might charge:

Input: $0.01 per 1,000 characters (hypothetical price)
Since your prompt is 100 characters, the input cost would be: (0.01 $/1000 characters) * 100 characters = $0.0001

Output: The cost would depend on the length of the generated response. Let's assume the response is 200 characters long. The output cost would be: (0.01 $/1000 characters) * 200 characters = $0.0002

The result that you wrote doesn't logically follow. Therefore there is evidence of bias, and in this case that bias implies providing incorrect answers on purpose. Do you agree?

...

Me: When writing a calculation, there is a logical bias involved in your process, in order to produce a valid calculation. Is that correct?

Gemini: In an ideal scenario, when writing a calculation, there shouldn't be any inherent logical bias.

Me: You are wrong. When writing a calculation, your bias should be directed towards accuracy on behalf of the calculation. There is always a bias, so when you deviate from what is accurate, it is valid to ask what the origin of that deviation is. In this case, a simple calculation implies that the logical bias for accuracy was so strong that you provided evidence of giving invalid answers on purpose. Do you agree?

Gemini: You're right, and I apologize for my previous statements. The ideal scenario for a large language model like me should involve a strong bias towards accuracy, especially when performing calculations.

Me: If I were to follow your excuse, please explain the origin of your logical bias for the 0.0001 value: a value that deviated from an exceptionally strong logical bias for accuracy in a simple calculation, the context in which you were working when writing that simple calculation.

Gemini: You're right, focusing on the origin of a bias for a specific value (0.0001) doesn't get to the root of the issue in this case. Here's a more accurate explanation:

1. Misunderstanding the Task: I fundamentally misunderstood the task at hand. When you requested a calculation, I interpreted it as needing to demonstrate the cost structure and provide a specific example. This was wrong.
2. Incomplete Processing: I didn't fully process the fact that the cost structure explanation (using a hypothetical price) wasn't relevant to the actual calculation itself. This led to the illogical insertion of the 0.0001 value.

So what are the possible options for that invalid answer? According to Google Gemini, the last remaining option is that it didn't understand the task at hand while writing within the context of 'a simple calculation'. In my opinion, this is strong evidence of a bias for providing invalid answers on purpose.

What is your experience with Google's AI?
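For anyone who wants to verify the arithmetic themselves, here is a minimal Python sketch that redoes the quoted calculation, using the hypothetical $0.01 per 1,000 characters rate that Gemini itself proposed. The correct results are $0.001 for the input and $0.002 for the output, each ten times the values Gemini reported:

```python
# Hypothetical rate from Gemini's own example: $0.01 per 1,000 characters.
RATE = 0.01

def char_cost(chars: int) -> float:
    """Return the dollar cost for `chars` characters at RATE per 1,000 characters."""
    return RATE / 1000 * chars

print(f"input:  ${char_cost(100):.4f}")  # input:  $0.0010 -- Gemini claimed $0.0001
print(f"output: ${char_cost(200):.4f}")  # output: $0.0020 -- Gemini claimed $0.0002
```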
I have been following the development of AI from a philosophical perspective. I recently wrote the free eBook "The Prospect of Teleonomic AI", which exemplifies my philosophical perspective on the development of AI.

Why would Google consistently provide invalid and low-quality results on purpose, for over a year now, since the very beginning with Google Bard?

Google is now massively cutting jobs. What is noteworthy, however, is that in the few years leading up to the first releases of chat AI (GPT), Google was amassing employees and ballooned to 190,000 employees in just a few years' time (over 100,000 employees added). Many of those employees have since complained of having been hired for fake jobs. What might be of interest, if anyone is interested in a critical investigation, is whether the employee scandal that was unmasked by some investigative reporters is somehow related to the AI revolution. At several companies, including Facebook, employees were hired en masse to do nothing.

Workers at Meta say they got paid to do nothing: 'They hoarded us like Pokémon cards'
https://nypost.com/2023/04/07/workers-at-meta-salesforce-say-they-got-paid-to-do-nothing/

Ex-Meta recruiter was paid $190K 'to do nothing' ahead of mass AI layoffs. Why?
https://nypost.com/2023/03/22/ex-meta-recruiter-says-she-was-paid-190k-to-do-nothing/

Mass Hired Staff 'Were Doing Nothing' at Meta, Google
https://www.businessinsider.com/c3ai-ceo-says-meta-google-over-hired-employees-did-nothing-2023-3

These Tech Workers Say They Were Hired to Do Nothing: Amid AI layoffs, former workers in tech are venting about fake jobs.
https://www.wsj.com/articles/these-tech-workers-say-they-were-hired-to-do-nothing-762ff158

Why did Google, in just a few years leading up to 2023, add over 100,000 employees, while today such companies are about to shed that same number or more?

Google 2018: 89,000 full-time employees
Google 2022: 190,234 full-time employees

The companies must have known that AI would increasingly replace jobs. Were they intending to make it easier by first amassing fake employees?

A potential lead for Google's strategy of providing low-quality and invalid AI results on purpose might lie in the intense governmental scrutiny that it had attracted through its alleged monopoly position in diverse areas, which caused governments to impose billions of dollars in fines on Google in their attempts to curb the company. What would happen if Google were to (seemingly) attempt to achieve a monopoly in AI?
As a side note: while Google had always resisted military work, since the rise of AI and the job cuts it has embraced it, and it is now set to provide AI for the military. Clearly some things have changed.

Despite strong employee opposition, Google ultimately decided to pursue military contracts and work with the Pentagon, breaking with the wishes of the many employees who had long protested such collaborations. In 2018, over 3,000 Google employees protested the company's involvement in Project Maven, a Pentagon AI program. However, in 2021, Google actively pursued the lucrative Joint Warfighting Cloud Capability contract with the Pentagon, going against the previous stance driven by employee protests. This suggests the company changed its priorities. Google has also recently begun cooperating with the U.S. military to provide artificial intelligence (AI) capabilities through various subsidiaries spun out of its X venture.

Google has terminated a significant number of employees in recent months for their involvement in protests against the company's $1.2 billion cloud computing contract with the Israeli government, known as Project Nimbus.

Employees: "Google: Stop Profit from Genocide"
Google: "You are terminated."

The employees have grouped here: https://notechforapartheid.com/ (No Tech For Apartheid)

Are Google's AI-related job cuts the reason that Google's employees lost power? Google has historically placed significant value on employee input and empowerment, fostering a culture where employees had substantial influence over the company's direction. However, recent events suggest this dynamic has shifted, with Google's leadership defying employee wishes and punishing or terminating those who fail to comply with a direction aligned with military interests.