Crypto now accounts for nearly 50% of all corporate donations to political action committees, reports Axios. Crypto donations this year amounted to USD 119.2 mn — a significant rise from USD 4.6 mn in 2022 and USD 5.2 mn in 2020.

SOUND SMART- Political action committees (PACs) are US organizations that pool donations and direct them to campaigns, candidates, or causes to influence elections or legislation.

What’s the strategy? This new trend indicates that large corporations will be directly contributing to initiatives and organizations that further their best interests in the upcoming US elections.

Make the crypto corporations happy or they’ll take matters into their own hands. Fairshake, the crypto industry’s dominant political action committee, has been playing both sides of the fence, supporting or opposing candidates based on whether their policies benefit or threaten crypto. Fairshake targeted US Representatives Katie Porter and Jamaal Bowman in their respective Democratic primaries after they spoke out against crypto. Both representatives lost their races.

So who does the crypto sector favor? We’ve seen the industry throw its support behind Donald Trump after his keynote speech at the Bitcoin 2024 conference, where he promised to fire the head of the US Securities and Exchange Commission if he once again takes office at the White House.

Politicians are playing along. In July, Kamala Harris’s advisers reached out to crypto companies to reset relations between the industry and the Democratic Party, allegedly in hopes of balancing out party endorsements.


We created AI, but we have no idea how it works. One of the biggest mysteries about AI is that, despite how advanced and commonplace it has become, it is still very unpredictable. But AI researchers are working around the clock to make sense of AI’s inner workings.

GenAI models are black box machine learning models, meaning they are opaque systems whose internal workings are not easily accessible… or interpretable.

AI’s brain doesn’t work the same way ours does, meaning that we might not be able to understand it anytime soon. OpenAI noted that they were unable to interpret much of their model’s thought process because they couldn’t trace a clear or linear pattern of thought. This was made even more difficult by seemingly unrelated features being activated during certain prompts.

GenAI guidelines can be overridden. While guardrails are put in place to prevent AI systems from providing possibly dangerous answers, if a question is asked indirectly or phrased differently from how the model was trained, the model can end up providing detailed answers anyway — letting users bypass the safeguards. Unfortunately, the very nature of deep learning models means this cannot be fully controlled.

But scientists are making strides in understanding how AI works. Anthropic has mapped a layer of neural networks in its Claude Sonnet model, identifying features — patterns of artificial neurons that activate when the model receives a prompt — corresponding to specific people or abstract concepts. They found that manipulating which features are activated can steer the model’s behavior.
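To make the idea concrete, here is a toy sketch — not Anthropic’s actual method or code — assuming a feature corresponds to a direction in the model’s activation space. “Clamping” a feature then just means nudging the hidden-state vector along that direction until the feature’s activation hits a chosen strength:

```python
# Toy illustration of feature steering (an assumption-laden sketch,
# not Anthropic's real implementation).
import numpy as np

rng = np.random.default_rng(0)
hidden_state = rng.normal(size=8)        # a model's internal activation (toy)
feature_direction = rng.normal(size=8)   # direction standing in for one concept
feature_direction /= np.linalg.norm(feature_direction)  # unit length

def feature_activation(state, direction):
    """How strongly the concept is 'active' in this state (a projection)."""
    return float(state @ direction)

def steer(state, direction, strength):
    """Edit the state so the feature's activation equals `strength`."""
    current = feature_activation(state, direction)
    return state + (strength - current) * direction

steered = steer(hidden_state, feature_direction, strength=5.0)
print(round(feature_activation(steered, feature_direction), 4))  # → 5.0
```

Turning the “strength” up or down on a real feature is, roughly, how researchers made Claude fixate on concepts like the Golden Gate Bridge — the same mechanism, at much larger scale.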

OpenAI pitches in: By mapping a layer of GPT-4, OpenAI found 16 mn features that correspond to concepts people might consider when evaluating a situation.

The bigger picture: Even with these breakthroughs, we are still scratching the surface of how these models work, with both OpenAI and Anthropic acknowledging that our current understanding has no practical application yet for improving AI safety.