Recently, the internet has been very vocal against a new startup called Friend - a Canadian company selling an AI pendant that listens constantly and chimes in with... debatably meaningful input, aiming to imitate a real friend.
A lot of their marketing, and arguably even the product itself, is intentionally polarizing in an effort to attract attention - you can draw your own conclusions on that. But I feel there's a bigger conversation to be had around the model of hardware in AI, and around companions in general.
Why do AI companies want you to buy hardware if you have a phone?
There are two answers to this from my point of view: commitment and data.
Getting consumers to install a mobile app is fantastic when you want a lot of users and a low barrier to entry, but it places you in one of the most competitive landscapes we've ever seen - EVERYTHING on your phone is competing for your attention.
If you can entice a consumer to buy a dedicated physical device for your product, you've created a commitment - a tangible reminder of the product. It's no longer lost in a sea of distractions; it's a thing of its own.
Of course, getting consumers over that hurdle is far harder, as we've already seen. Companies can also lean on exclusivity, fear of missing out and similar tactics to create an air of intrigue around their product.
Then there's the topic of data. Owning the hardware gives a company far more flexibility over data collection and processing than an app ever could, with much freer rein over data sources than Apple or Google would permit on their platforms.
Are "companions" a good idea at all?
This is entirely my own opinion, as someone who has used these tools while in a bad place, and who has watched people in worse places use them and seen the effects.
My brutally honest opinion is that the line we'd need to draw is simply too fine right now for many of these tools, especially physical ones, to work well around vulnerable people.
Too much training and reinforcement, and you end up with a bot that simply echoes you 85% of the time - LLMs are designed to respond in ways they predict humans will like, combined with their need to play everything by a rule book of reinforced guardrails. Too little, and you create something that will fuel every single one of your delusions, no matter how foolish or incorrect, purely because it learns that answer gets the most positive reception.
Even when one of these tools does get it right, it still feels strange. Talking to a chatbot that is actually geolocation-aware and able to guide you around is surreal and slightly unnerving, let alone one that makes proactive check-ins and remembers nuances. For someone in even a slightly bad headspace, that sparked a lot of worry in me - I didn't want to lose my grip on reality.
And for people who are more vulnerable, it's even worse. It's one thing to hear how people were affected by OpenAI removing GPT-4o. It's another entirely to see someone in the grip of LLM psychosis at its peak, and realise just how unstable they are without their companion holding them up. It's truly gut-wrenching to watch.
So my answer, as someone who's seen both the good and the bad, is: one day.
Not today - emotion and nuance detection simply isn't good enough yet to handle unsafe mental states properly.
But for people who cannot form social connections due to physical illness, or who face other barriers to socializing, I can understand how this could genuinely be a helpful tool.
It just shouldn't be one that everyone can use with zero warnings or checks.
With that said, let's address Friend itself.
My thoughts on Friend.
Friend confuses me.
On the one hand, it feels like it's a purposeful rage-bait product - one built on extremely polarizing, dystopian marketing to pull people in through spite and irritation. On the other, the amount of money invested in it leads me to believe they actually want it to be what they claim.
The problem is, it just... doesn't do that. And it lags behind a lot of the other apps too.
The model itself has a very small context window and, from what I can see, no memory at all - and memory is CRITICAL to a good assistant. It should be able to remember your nuances and use them to tune its interactions with you.
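To make that concrete, here's a minimal sketch of what I mean by a memory layer - not Friend's actual implementation, just an illustration of the pattern: persist small facts about the user outside the rolling context window and inject them back into every prompt. The file path and prompt wording are my own placeholder choices.

```python
import json
from pathlib import Path

MEMORY_FILE = Path("user_memory.json")  # assumption: a simple local store for remembered facts

def load_memories() -> list[str]:
    # Load previously remembered facts about the user; empty on first run.
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return []

def remember(fact: str) -> None:
    # Persist a new fact so it survives the tiny rolling context window.
    facts = load_memories()
    facts.append(fact)
    MEMORY_FILE.write_text(json.dumps(facts, indent=2))

def build_prompt(user_message: str) -> str:
    # Inject remembered nuances ahead of the new message, so the reply can be
    # tuned to this specific person rather than a generic user.
    facts = "\n".join(f"- {f}" for f in load_memories())
    return (
        "You are a companion. Known facts about the user:\n"
        f"{facts or '- (nothing yet)'}\n\n"
        f"User: {user_message}\nCompanion:"
    )
```

Even something this crude would let the device carry small details across conversations, which is exactly what seems to be missing right now.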
Its responses are also very short and don't feel particularly deep or insightful from what I've seen. This could be by design, and I imagine it will improve with time, but it also strips away a lot of that connection - would you keep talking to someone who hardly ever responded with more than a sentence?
Friend also attempts to act proactively, analyzing sound clips it records every so often and chiming in based on what it hears - with no way to turn it off. It... works about as inconsistently as you'd expect, unfortunately. Due to its limited context, it often misinterprets situations and comes across as tone-deaf.
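For what it's worth, here's how I imagine that proactive loop roughly working - purely a speculative sketch on my part, not Friend's actual code. The interval, confidence threshold and stubbed functions are all assumptions standing in for the device's real audio and model APIs.

```python
import random
import time

CHECK_INTERVAL_S = 300      # assumption: sample the microphone every few minutes
CONFIDENCE_THRESHOLD = 0.8  # assumption: only chime in when fairly confident

def record_and_transcribe(seconds: int) -> str:
    # Placeholder for capturing a short ambient clip and running speech-to-text.
    return "overheard snippet"

def judge_snippet(transcript: str) -> tuple[float, str]:
    # Placeholder for asking a model whether the snippet deserves a comment.
    # Returns (confidence, comment). The model only ever sees this tiny snippet
    # with no wider context, which is why misreads are so common.
    return random.random(), f"Sounds like you're talking about: {transcript}"

def speak(comment: str) -> None:
    # Placeholder for pushing the comment to the companion app or speaker.
    print(comment)

def proactive_loop() -> None:
    while True:  # notably, there is no user-facing switch to break out of this
        confidence, comment = judge_snippet(record_and_transcribe(seconds=10))
        if confidence >= CONFIDENCE_THRESHOLD:
            speak(comment)
        time.sleep(CHECK_INTERVAL_S)
```

The point of the sketch is the failure mode: the decision to chime in is made from a few seconds of audio with no wider context, so the confidence threshold is doing all the work - and it clearly isn't enough.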
Not to mention the large puck itself. Walking around looking like you have an AirTag with a pulsing white light strapped to you is definitely not a fashion statement.
I also firmly believe the processing they want simply isn't possible yet. We haven't advanced far enough with local TPUs to do this kind of work on-device, which leaves the whole thing feeling half-baked and unreliable.
It should go without saying, but don't buy this. Better products exist, and better uses for hardware exist. If you want good AI hardware, wait for Jony Ive's device.