Gen Z is de-influencing themselves. TikTok’s latest trend? Underconsumption. The new movement has emerged as a response to the overload of high-end aspirational content on TikTok, says The New York Times. Tired of the endless stream of influencers flaunting luxury items and unattainable lifestyles, users are embracing a more practical, minimalist way of living, especially after the post-lockdown revenge spending craze.
The “underconsumption core” trend has caused a noticeable shift: instead of shopping hauls and unboxing videos, content creators are now promoting upcycling and discouraging the purchase of single-use products. The primary driver behind the downswing, besides the unrelatability of influencers, is a growing awareness of economic hardship and of one’s own financial means.
Is it sustainable? Brett House, an economics professor at Columbia University, notes that “back-to-basics” trends come about every decade, especially following major economic downturns. The correct mindset, House believes, is to view the trend as one of “appropriate consumption rather than underconsumption.”
JPMorgan looks to AI to aid its research analysis. Last week, the bank began rolling out an in-house chatbot to take on the duties of a research analyst, reports the Financial Times. Employees in JPMorgan Chase’s asset and wealth management division now have access to LLM Suite, a tool meant to offload some of their tasks. “Think of [it] as a research analyst that can offer information, solutions and advice on a topic,” said an internal memo signed by department head Mary Erdoes.
Just a productivity tool… For now. The memo, sent to the 50,000 employees who now have access to the tool, said the “ChatGPT-like product” was meant to be used for “general purpose productivity,” and made no mention of cost-cutting or employee replacement. However, in January it was reported that the company had quietly laid off 500 investment banking employees, a trend that has continued into the year.
Can JPMorgan trust the algorithm? AI models have been caught “hallucinating”: ChatGPT 3.5, which powers the LLM Suite, has been found to report incorrect information about federal court cases 69% of the time. Even Google’s attempts haven’t bypassed AI’s pink elephants. In May, Google’s AI Overview quickly started doling out nonsense, advising users to use glue to secure cheese onto their pizzas and to consume “at least one small rock a day” for nutritional purposes.
Just yesterday, OpenAI’s demo of SearchGPT, meant to rival Google Search, malfunctioned at launch. Matteo Wong of The Atlantic reported that a search for “music festivals in boone north carolina in august [sic]” returned a top suggestion with incorrect dates. “AI can’t [even] properly copy-paste from a music festival’s website,” said Wong.