Men between the ages of 25 and 54 have slowly been dropping out of the workforce, reports CNBC. Some 10.5% of US men in this age group (6.8 mn in total) are not only unemployed but are showing no interest in looking for work. That is a big jump from the rate of 2.5% in 1954, as documented by the US Bureau of Labor Statistics.
What’s going on? A study conducted by the Pew Research Center found a link to education: men without a college education leave the workforce at higher rates than those with one. “The big impacts are on the non-college-educated groups on their ability to enter and stay in the labor market,” said Jeff Strohl, a director of the Center on Education and the Workforce at Georgetown University. So why does the rate keep climbing? The number of men enrolling in college has been dropping over the past decade.
Higher education is becoming less and less appealing. More people are choosing not to attend university to avoid student loans, which often leave Americans in debt for close to 20 years; interest rates on these loans have risen by 44% in just five years. A growing share of Gen Z is now embracing the NEET lifestyle (Not in Employment, Education, or Training), and research shows that men are more likely than women to do so.
Experts are concerned. “The long-term decline in labor force participation by so-called prime-age men is a tremendous worry for our society, our economy, and probably our political system,” said Nicholas Eberstadt, a political economist at the American Enterprise Institute. In fact, the global employment-to-population ratio has been steadily declining since 2000, dropping by almost 5% over the last 24 years.
OpenAI doesn’t want users poking around to see what its latest AI model is “thinking”. Since launching the new Strawberry AI model family last week, OpenAI has been on high alert, sending warning emails and threatening bans to anyone who dares to look too closely, Wired reports. When it comes to its shiny new o1-preview and o1-mini models, the company would prefer to keep the secrets under wraps.
What’s the big deal? Unlike previous models, this one is designed to think through problems step by step, like in a math exam where you have to show how you got your answer. Cool, right? Here’s the twist: when you ask ChatGPT a question, you don’t actually get to see that breakdown. OpenAI hides it, giving you a filtered version cleaned up by a second AI. That is what piqued the interest of hackers and AI enthusiasts, who are now racing to expose o1’s raw chain of thought using jailbreaks and prompt injections. Some claim partial success, but nothing concrete has surfaced yet.
But OpenAI is watching. Anyone who dares to ask the model about its deduction methods (even the term “reasoning trace” is flagged) might end up getting a stern email from the company.
Why the gatekeeping? OpenAI argues that these hidden chains of thought allow it to better monitor the model for any shady behavior, just in case the AI gets too smart and starts getting manipulative. There’s also the matter of not giving too much away to competitors: AI researchers love to train rival models on OpenAI’s outputs (even though it’s against the terms of service), and raw reasoning data would be a goldmine for anyone looking to create a knockoff version.
Remember: OpenAI was founded on the promise of being open source, making AI accessible to everyone. It’s even in the name. Yet the more revolutionary the tech becomes, the more tightly the company keeps its code under wraps. Tech titan and chronically online internet troll Elon Musk, who co-founded the company with Sam Altman back in 2015, sued OpenAI for abandoning its original mission, claiming that it misled him and other backers. The company responded with a long blog post containing several emails showing that not only was Musk aware of the game plan, but that he was on board with it.
Is OpenAI being hypocritical? The company built its models by hoovering up information from across the internet, allegedly scraping user data without consent and using copyrighted material unlawfully. So while it locks down its own data, one can’t help wondering: if OpenAI can freely source information from the internet, shouldn’t everyone else be free to do the same with OpenAI’s outputs?