Project #2 Announcement - Privately.to
Push the limits of private AI systems today, and build for the future.
Privacy is broken
Over the last 50 days, I've explored the foundations of software engineering across design, frontend, backend, AI, and hosting.
Project #1, “BetterFriend,” touched every layer of modern software development while doing something simple: reminding me of my friends' and family's birthdays. I even wired Google's Gemini API into the receiving end of the chatbot to accurately categorize relationships. The best way to learn is to do, and I truly feel that I am learning faster than ever.
Along the way, I dug deep into privacy and encryption and ran into constraint after constraint when trying to protect user data. Even privacy-focused applications like WhatsApp and Telegram have chatbot APIs that did not allow me to make these interactions fully private (end-to-end encrypted). That was a disappointment. The bigger disappointment was facing the reality of our world today: my simple desire to keep user data private is, in fact, not the norm.
Most apps that use AI today don't care about privacy. They can access some of our most private data without asking permission: private discussions, voice messages, and any documents or interactions we have with these apps. Just look at the recent report on Grok allowing Google to index all “shared” conversations without any warning to users.
"When a Grok chat is finished, the user can hit a share button to create a unique URL, allowing the conversation to be shared with others. According to Forbes, "hitting the share button means that a conversation will be published on Grok's website, without warning or a disclaimer to the user." These URLs were also made available to search engines, allowing anyone to read them.
There is no disclaimer that these chat URLs will be published to the open internet. But the Terms of Service on the Grok website read: "You grant an irrevocable, perpetual, transferable, sublicensable, royalty-free, and worldwide right to xAI to use, copy, store, modify, distribute, reproduce, publish, display in public forums, list information regarding, make derivative works of, and aggregate your User Content and derivative works thereof for any purpose…"
Then a bombshell dropped in Sam Altman’s recent interview:
“People talk about the most personal sh** in their lives to ChatGPT,” Altman said. “People use it — young people, especially, use it — as a therapist, a life coach; having these relationship problems and [asking] ‘what should I do?’ And right now, if you talk to a therapist or a lawyer or a doctor about those problems, there’s legal privilege for it. There’s doctor-patient confidentiality, there’s legal confidentiality, whatever. And we haven’t figured that out yet for when you talk to ChatGPT.”
"This could create a privacy concern for users in the case of a lawsuit, Altman added, because OpenAI would be legally required to produce those conversations today."
Today, almost no AI company offers local or fully private options. They claim they will never use your data, yet they hold the keys to every piece of data you send them. There's no way to verify whether they actually follow their own terms of service.
The terms of service? Full of lawyer jargon that protects the company more than the user. These startups are less than two years old, and they all depend on this valuable data to improve their services. They may not be around in 12 months. They may be acquired by a bigger company that uses this data without user permission. Every user is at risk.
Let's use a popular note-taking app as an example: Granola, which has raised $43 million.
An AI notepad that records your business meetings and summarizes them for you.
Their privacy policy says this:
https://www.granola.ai/docs/policies/privacy/pp
"At Granola, we take your privacy seriously. Please read this Privacy Policy to learn how we treat your personal data.
Before we get into the details, below are a few key points we’d like you to know:
We do not and do not allow third parties such as OpenAI or Anthropic to use your Personal Data to train AI models.
We only use De-Identified Data to train AI models, which you can opt-out of within your Granola account settings.
We store your Personal Data in Amazon Web Services (“AWS”) servers located in the U.S. All Personal Data is encrypted at rest and in transit using AWS’s encrypted database system. Additionally, we implement a variety of security measures including firewalls, virtual private cloud (VPC) setups, and anti-virus/malware protection to help protect your Personal Data."
Their actual service does this:
They use our "De-Identified Data" by default to train their models. OUR business and personal meeting data. Unless we opt out.
They encrypt our data in transit to their database and at rest once it arrives, but they hold all the keys, so they can unlock our data at any time.
In other words:
“We train on your data by default, and you have to trust us to remove your name from your data or opt out. We can access your data at any time. There's no way to verify what we do with it.”
We call this a honeypot. If Granola gets hacked, millions of meeting notes and conversations could be leaked. They will also have to obey government requests to hand over your data.
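For contrast, true end-to-end encryption means the key is generated and kept on the user's device, and the server only ever stores ciphertext it cannot open. Here is a minimal sketch in Swift with Apple's CryptoKit (the function names are mine and purely illustrative; this is the architecture Granola lacks, not their code):

```swift
import CryptoKit
import Foundation

// E2EE in a nutshell: the key never leaves the user's device.
// A real app would keep it in the Keychain or Secure Enclave.
let key = SymmetricKey(size: .bits256)

// Encrypt locally before upload; the server stores only an opaque blob.
func encryptBeforeUpload(_ note: String) throws -> Data {
    let sealedBox = try AES.GCM.seal(Data(note.utf8), using: key)
    return sealedBox.combined!  // nonce + ciphertext + auth tag
}

// Only a device holding the key can turn the blob back into text.
func decryptAfterDownload(_ blob: Data) throws -> String {
    let sealedBox = try AES.GCM.SealedBox(combined: blob)
    return String(decoding: try AES.GCM.open(sealedBox, using: key), as: UTF8.self)
}
```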
Aftermath
People are probably still using ChatGPT for therapy today, even after Sam's statement and warning. The benefit of having a therapist for $20/month is too good to pass up. It's the same way we know Instagram and Facebook use our data to make us more addicted to social media, yet we still use them: the benefit of keeping up with our friends and relationships is simply too great compared to our privacy.
Social media collecting our data is bad enough. With AI, the risk is too big to ignore. If governments or companies have open access to our thoughts and feelings, drawn from our private meetings, conversations, and therapy sessions, what they can do with that information is scary.
Facebook was able to influence the US election simply by knowing people's interests, location, age, and past internet behavior. Can you imagine if they knew how we feel every day and what we are working on at any given moment? That's the level of access AI apps like ChatGPT, Claude, and Grok have today.
We need an alternative. Fast.
My Mission:
Push the limits of what private AI systems can do today, and build for the future.
Project #2 — Privately.to
Version 0.1 Goal (by September 10th, 2025)
Fully private, end-to-end encrypted Mac app for journaling
Word count functionality (a tiny sketch follows this list)
AI for summary and idea extraction
Already using the MVP daily for 30-minute journaling
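Some of these pieces are refreshingly small. Word count, for instance, is a few lines of Swift (a sketch assuming plain-text entries; the function name is mine):

```swift
// Count words by splitting on any whitespace, including newlines.
func wordCount(of text: String) -> Int {
    text.split(whereSeparator: \.isWhitespace).count
}

// wordCount(of: "Today I wrote three pages.")  // 5
```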
Version 0.5 - Beta
Beautiful color-coded timer (non-distracting)
Customizable backgrounds and text colors
Different fonts
Writing prompts from experts/psychologists
AI-generated background music (Suno/ElevenLabs)
Version 0.8 - Beta
Voice dictation with local voice-to-text (sketched after this list)
Local voice data storage
Optional E2EE cloud storage (Proton Drive & MCP)
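For the dictation goal, Apple's Speech framework can be told to keep recognition entirely on-device, so audio never leaves the Mac. A rough sketch of the idea (the locale and file URL are placeholder assumptions, and the authorization prompt is omitted):

```swift
import Speech

// Transcribe an audio file locally; requiresOnDeviceRecognition forces
// the Speech framework to process everything on this machine.
func transcribeLocally(fileURL: URL) {
    guard let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US")),
          recognizer.supportsOnDeviceRecognition else {
        print("On-device recognition is unavailable for this locale")
        return
    }
    let request = SFSpeechURLRecognitionRequest(url: fileURL)
    request.requiresOnDeviceRecognition = true  // audio is never sent to Apple
    recognizer.recognitionTask(with: request) { result, _ in
        if let result, result.isFinal {
            print(result.bestTranscription.formattedString)
        }
    }
}
```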
How to Get There
Technical Requirements:
Independently verify Anytype's E2E encryption
Master agent coding tools (LangChain/Claude Code/Qwen/Cline/Gemini CLI)
Learn proper project management for agents (Linear or Anytype)
Foundations of Mac app development / Xcode
Identify the best local AI for summarization (small models)
Embed LM Studio functionality into the Mac app (see the sketch below)
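On that last point: LM Studio can expose the loaded model through a local, OpenAI-compatible server (http://localhost:1234 by default), so the app can request summaries without a byte leaving the machine. A hedged sketch of that call from Swift; the model name and prompt are my placeholders:

```swift
import Foundation

// Ask a local LM Studio server to summarize a journal entry.
// "local-model" is a placeholder: LM Studio responds with whichever
// model is currently loaded.
func summarize(_ entry: String) async throws -> String {
    var request = URLRequest(url: URL(string: "http://localhost:1234/v1/chat/completions")!)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    let body: [String: Any] = [
        "model": "local-model",
        "messages": [
            ["role": "system", "content": "Summarize this journal entry and extract the key ideas."],
            ["role": "user", "content": entry],
        ],
    ]
    request.httpBody = try JSONSerialization.data(withJSONObject: body)

    let (data, _) = try await URLSession.shared.data(for: request)
    // Minimal parsing; a real app would define Decodable response types.
    let json = try JSONSerialization.jsonObject(with: data) as? [String: Any]
    let choices = json?["choices"] as? [[String: Any]]
    let message = choices?.first?["message"] as? [String: Any]
    return (message?["content"] as? String) ?? ""
}
```

Everything in that request stays on localhost, which is the whole point.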
What I Need From You
Feedback on my Substack writing - what's useful vs. not
Share your journaling habits - do you journal daily? If not, what would make you start?
Follow the Substack to track progress and keep me accountable
The goal isn't just to build another app. It's to prove that private AI can be as powerful and accessible as today's services - without sacrificing our fundamental right to privacy.