DeepSeek R1

What it is, and is not

News

OpenAI has built an AI model exclusively for biological and longevity research. GPT-4b Micro is a specialized model focused on re-engineering Yamanaka factors, the proteins that can transform adult cells into versatile stem cells. It was trained on an extensive dataset of protein sequences, allowing it to suggest bold modifications to proteins that would be challenging to explore through traditional laboratory methods.

Black Forest Labs has released the FLUX Pro Finetuning API, which lets you fine-tune and run the FLUX image engine on your own images.

Forbes magazine has published a list of the top 10 AI-related jobs in 2025.

OpenAI is engaged in what the White House has titled Project Stargate, a $500 billion fund for AI development. As we have already witnessed with the success of DeepSeek R1, throwing money at the issue is not necessarily the answer.

Perplexity has launched Sonar, an API for developers.

Character AI has filed a motion to dismiss one of the two lawsuits filed against it. In October, Megan Garcia, the mother of a 14-year-old boy who committed suicide after becoming emotionally attached to a Character chatbot named “Dany,” filed suit against the company, claiming it was responsible for the social isolation and emotional distress that preceded the boy’s death.

According to a leaked source, Apple has defined two main priorities for its AI efforts this year: redesigning Siri and improving its AI models.

Hugging Face has launched SmolVLM-256M and SmolVLM-500M, compact visual AI models designed for devices with less than 1GB of RAM. These models excel at complex tasks across various media types, including diagram analysis and document comprehension.

AI For Good

Revolutionizing Enzyme Development

For centuries, scientists have been working to harness the power of enzymes in digestion and beyond. These biological molecules speed up chemical reactions, hold immense scientific promise, and can even aid in sustainability efforts. However, the traditional process of developing new enzymes has been a time-consuming and labor-intensive one.

A New Era in Enzyme Development

Recently, researchers at Stanford University made a groundbreaking discovery that is changing the game. A team of bioengineers and synthetic biologists has developed a computational platform that enables them to mimic the biological experimental process digitally, reducing the synthesis process from months to just days.

Using machine learning models, the platform allows researchers to predict highly active enzyme variants that perform just as well as their biological counterparts. This means that scientists can now design and test enzymes faster, more efficiently, and with greater accuracy.

To test the platform’s capabilities, the researchers used it to develop a small-molecule pharmaceutical as a proof of concept. The team is now eager to explore a wide range of applications, including the degradation of toxins from the environment.

While this breakthrough has the potential to revolutionize enzyme development, there is still a significant challenge: data. The team notes that collecting and analyzing the vast amounts of data required to train and fine-tune these machine learning models remains a major hurdle.

A New Frontier in Science

The development of this computational platform represents a significant step forward in the field of enzyme engineering. With the ability to design and test enzymes more efficiently, scientists can now tackle some of the world’s most pressing environmental challenges. Stay tuned to see how this technology unfolds and what breakthroughs it may lead to in the future.

Cudo Compute is a cloud-based service provider that offers high-performance computing, AI, and deep learning solutions.

Dubsado is great for contract writing and project management.

Folk is the number one AI-powered CRM tool.

N8N is the most powerful automation tool.

Prompt

A polished, metallic tetrahedron floats gracefully in a stark, empty white space, softly illuminated to create gentle, symmetrical shadows that enhance its sleek form and highlight its inherent spiritual energy. The scene is captured on the Arriflex 16SR, showcasing ultra-fine, realistic textures that accentuate the tetrahedron's surface while the minimalist backdrop evokes a sense of tranquility. This meditative setup draws viewers into a focal point of balance and harmony, with cinematic warmth and depth, inviting them to ponder the intersection of simplicity and complexity in a visually striking composition. – minimalist, serene, ultra-realistic textures, soft illumination, cinematic warmth

Non-Image Prompt

Please explain in detail how to design effective user journeys and 
personas for a product or service. Include the following elements:
1. Start by defining what user personas are and their importance in user 
journey mapping.
2. Describe how to create detailed user journeys that capture the entire 
interaction flow from start to finish, including key touchpoints and 
outcomes.
3. Discuss the integration of dynamic changes in user behavior and 
feedback loops into the design process.
4. Highlight any advanced techniques or tools that can enhance the design 
of personas and journeys, such as AI-driven personalization or machine 
learning applications.
5. Emphasize continuous improvement strategies for refining personas and 
journeys based on real-world data and user engagement metrics.

Newsletters I like

What is DeepSeek R1?

If you haven’t heard, there is a new model on the block. It didn’t come from Stanford, UW Seattle, or Harvard. In fact, it didn’t even come from North America. It came from overseas and was built in a country that is currently under certain export bans as well. In this post I will cover what DeepSeek R1 is, what is new (or not new) about it, and whether it shifts the balance of AI innovation from the US to overseas.

First, What is DeepSeek R1?

DeepSeek is a side project of High-Flyer, a quantitative trading firm based in China. It started out as a research project within the company, and the original name for the project was Fire-Flyer.

High-Flyer had been stockpiling GPUs for its research projects around trading funds and currencies. The original idea was to create a company, to be called DeepSeek, as a tool for building AI models based upon synthetic data.

It worked.

The primary purpose of DeepSeek was to build on a long-term philosophy: the main idea was to create a tool for research rather than for commercialization first. Commercial use was a secondary purpose, but since the model is now open source you can use it basically for free.

Knowing this, Cuppa.sh has already started allowing users to access DeepSeek while writing content. Cuppa lets you access an LLM via an API key, which makes using Cuppa to write content much cheaper than using something like ChatGPT straight in the browser. AnythingLLM is an open-source tool that also allows you to use an API to write, rewrite, and rephrase your content, and it works with DeepSeek API keys as well.

To learn more about using DeepSeek API keys, try this article on the DeepSeek website.
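As a rough sketch of what that looks like in practice, the DeepSeek API follows the same OpenAI-compatible chat format that tools like Cuppa and AnythingLLM expect, so a minimal call from Python only needs an API key, a base URL, and a model name. The endpoint and model names below are assumptions based on DeepSeek’s published documentation; double-check them against the article above before relying on this.

# Minimal sketch of calling DeepSeek through its OpenAI-compatible API.
# The base URL and model names are assumptions; verify them in DeepSeek's docs.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",      # key generated on the DeepSeek platform
    base_url="https://api.deepseek.com",  # assumed OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="deepseek-chat",  # R1 itself is assumed to be exposed as "deepseek-reasoner"
    messages=[
        {"role": "system", "content": "You are a helpful writing assistant."},
        {"role": "user", "content": "Rewrite this paragraph in a livelier tone: ..."},
    ],
)

print(response.choices[0].message.content)

Because the request shape is identical, switching between DeepSeek, OpenAI, or any other compatible provider is mostly a matter of swapping the key, base URL, and model name, which is why writing tools can support DeepSeek keys so easily.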

My Experience with DeepSeek

I have only begun to use DeepSeek, through an API or through the terminal, and at this point I am personally not impressed. The writing still has that bland AI readability that overuses words such as ‘workflow’, ‘crucial’, ‘in today’s fast-paced world’, and so forth. On top of that, the writing itself is not superior to what I have found with ChatGPT or Claude.

In fact, I have had to take DeepSeek output and run it through ChatGPT in order to get something useful out of it. I do not find it superior to current models from Anthropic or OpenAI, or even Gemini. Those are my go-to models when I need or want help with my writing or research. Asking a question of ChatGPT almost always gives me a useful answer.

Ending Thoughts

There are other problems with DeepSeek, such as privacy. Its terms of use state that the company has full rights to any output coming from use of its model. Some are starting to call attention to this as well.

Also, DeepSeek is not reinventing the wheel. It was built upon data and output coming from ChatGPT. Though I think open source is the future of AI models, this is not the model that proves it yet. As has been pointed out in this post, copying an LLM is not that complicated for anyone with a bit of knowledge; it is, frankly, fairly trivial to do.

Since this is essentially a copy, if you will, of models already in use, does it end American dominance in this space? Not quite. Does the development of AI models need to be more democratized? Yes. Is open source the way to go? Likely, as copying a fully researched and vetted AI model is not as complicated as we had thought it to be.