Nanogram is designed for the enthusiast who wants complete data sovereignty over their social media platform.
Spin up your own instance on Termux for Android.
Demo here.
Install instructions are at the bottom of the readme.
From a previous, now-deleted post by OP in another community:
Good to know. Avoiding this like the plague now.
Curious, is that basically just out of principle? Because it’s actually a pretty good way to stay away from AI now that it exists. That’s why I made it.
If a single exploit were discovered in what you have here, would you know how to go in and fix it, and then verify the fix yourself, outside of the dubious words of an LLM?
I’m not interested in entrusting my data/software/device to your faith in some models instead of the wisdom of a human being.
This is why I would not use it.
No, not without an LLM, but I’m pretty sure I could patch it with one.
If an exploit is discovered, it’s going to be something that gets past the login, in which case the attacker already has the .onion address that was leaked by a user. I tried every possible way to penetrate the login without credentials and made it as bulletproof as I could. I also implemented a function in the manager to rotate the onion address and discard the old one. That brings it back to square one: distributing the address securely.
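For anyone curious, rotation like that can be as simple as discarding the hidden service keys and reloading Tor. Below is a minimal Python sketch of the idea, assuming the instance runs as a Tor hidden service with its keys in a HiddenServiceDir; the path and the rotate_onion_address helper are hypothetical illustrations, not Nanogram’s actual manager code:

```python
import subprocess
import time
from pathlib import Path

# Hypothetical key location; Nanogram's real manager may keep this elsewhere.
HS_DIR = Path.home() / ".nanogram" / "hidden_service"


def rotate_onion_address() -> str:
    """Throw away the current onion keys and let Tor mint a fresh address."""
    # Deleting the private key is what actually invalidates the old .onion
    # address; the hostname file is just a cached copy of the derived name.
    for name in ("hs_ed25519_secret_key", "hs_ed25519_public_key", "hostname"):
        (HS_DIR / name).unlink(missing_ok=True)

    # SIGHUP makes the running Tor daemon reload its config; with the old
    # keys gone, it generates a new key pair for the HiddenServiceDir.
    subprocess.run(["pkill", "-HUP", "-x", "tor"], check=True)

    # Wait for Tor to publish the new hostname, then return it so it can be
    # handed out securely to trusted users (square one, as noted above).
    for _ in range(30):
        hostname = HS_DIR / "hostname"
        if hostname.exists():
            return hostname.read_text().strip()
        time.sleep(1)
    raise RuntimeError("Tor did not write a new onion address in time")
```

The point is that discarding the ed25519 key pair, not just the hostname file, is what retires the old address, since Tor derives the .onion name from the public key.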
This is totally fair and I respect your opinion; I just think it’s a little naive.
Under ISO 27002, 90003, 25000, and 9001, and their requirements for software pedigree and sustainability, it’s just best practice.
Is it ironic that you’re calling best practice “naive”?
Fair enough, in that case we think the same of each other.
True! It was.
I decided it had graduated from being shared via a paste, so I deleted that post and finally put it on a proper code repo.
In fact, I don’t code professionally and have never developed anything, lol! This is a fun side project I made for myself.
Contradicts:
It’s great if it’s for yourself, or for learning something new to you. Releasing it like this and telling others to install software you didn’t even write is a security nightmare and disingenuous. Nowhere in your readme or any other repo files does it specify that YOU don’t code, and that this product is all due to AI and LLMs.
I don’t see how these two statements contradict each other at all.
If you think it’s unsafe, don’t install it. I demonstrated exactly what it does, and the entire source is available to pick apart if you desire. I’m not forcing anyone to do anything.
Sure, I didn’t write the code persay, but it still took me two months to make this thing. Prompt after prompt, testing each iteration.
per se