Have you tried kitty? It’s seriously nice if you can live with the occasional “oh no I sshed to a server that doesn’t have the correct terminfo files and now none of the normal terminal navigation features work”
This doesn’t really install it, though; you can’t update it, permanently edit any config, set up users, or anything like that. I would guess OP wants something more like booting the ISO in a VM, allocating a thumb drive to that VM, and then installing a full system to it with a boot loader.
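A rough sketch of that approach with QEMU (the ISO path and /dev/sdX below are placeholders; double-check the device node with lsblk before pointing a VM at it):

import subprocess

# Placeholders: the installer ISO and the thumb drive's device node.
iso = "installer.iso"
stick = "/dev/sdX"

subprocess.run(
    [
        "qemu-system-x86_64",
        "-enable-kvm",
        "-m", "4G",
        "-cdrom", iso,
        # Attach the raw stick as a disk so the installer can partition it
        # and write a boot loader to it, producing a real portable install.
        "-drive", f"file={stick},format=raw,if=virtio",
    ],
    check=True,  # needs enough privileges to open the raw device
)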
If I may ask, why do we want to enable tearing now? There are pages and pages across the wikis on how to fix tearing…
This is slightly off topic, but adding lanes does not alleviate traffic in the long term at all; the effect diminishes quickly and vanishes after just five years.
Come on, this one is funny, but why pretend it was ever made by a right-wing person in earnest? Everything about it screams classic mocking meme.
Again, see the issue on the repo. The developers recommend just using the app feature of the browsers to get similar functionality without the security concerns.
I honestly just did it to try to get cleaner logs by having the container be responsible only for the proxying.
If you look at the repo, the very first line in the readme links to an issue that briefly explains why you should care.
Unmaintained software comes in two categories:
Nativefier falls in the second category and the second clause. Don’t use it.
I’ll try that, but since I haven’t been able to find any related issues I’m pretty sure it’s a configuration error on my part. Hence the regrettably long post. The next step will probably be to open an issue on authentik’s GitHub, but since I think it’s PEBKAC I’d prefer not to waste their time.
You asked for my Python script, but now I can’t seem to load that comment to reply directly to it. Anyway, here’s the script; I haven’t bothered to upload the repo anywhere. I’m sure it isn’t perfect, but it works fine for me. The action for opening Evolution when you click the tray icon is specific to Hyprland, so it will probably need to be modified to suit your needs.
import asyncio
import concurrent.futures
import logging
import signal
import sqlite3
import sys
from pathlib import Path
from subprocess import run

import pkg_resources
from inotify_simple import INotify, flags
from PySimpleGUIQt import SystemTray

menu_def = ["BLANK", ["Exit"]]

empty_icon = pkg_resources.resource_filename(
    "evolution_tray", "resources/inbox-empty.svg"
)
full_icon = pkg_resources.resource_filename(
    "evolution_tray", "resources/inbox-full.svg"
)

inotify = INotify()
tray = SystemTray(filename=empty_icon, menu=menu_def, tooltip="Inbox empty")

logging.getLogger("asyncio").setLevel(logging.WARNING)
handler = logging.StreamHandler(sys.stdout)
logger = logging.getLogger()
logger.setLevel("DEBUG")
logger.addHandler(handler)


def handle_menu_events():
    """Block on tray events: quit on Exit, open Evolution on click."""
    while True:
        menu_item = tray.read()
        if menu_item == "Exit":
            signal.raise_signal(signal.SIGTERM)
        elif menu_item == "__ACTIVATED__":
            # Hyprland-specific; replace with whatever launches Evolution
            # on your setup, e.g. run(["evolution"]).
            run(["hyprctl", "dispatch", "exec", "evolution"])
            logger.info("Opened evolution")


def get_all_databases():
    """One folders.db per account under Evolution's cache directory."""
    cache_path = Path.home() / ".cache" / "evolution" / "mail"
    return list(cache_path.glob("**/folders.db"))


def check_unread() -> int:
    """Count unread inbox messages across accounts and update the tray."""
    unread = 0
    for db in get_all_databases():
        conn = sqlite3.connect(db)
        cursor = conn.cursor()
        try:
            cursor.execute("select count(*) read from INBOX where read == 0")
            unread += cursor.fetchone()[0]
        except sqlite3.Error:
            # Not every account has an INBOX table; skip those.
            pass
        finally:
            conn.close()

    if unread > 0:
        tray.update(filename=full_icon, tooltip=f"{unread} unread emails")
    else:
        tray.update(filename=empty_icon, tooltip="Inbox empty")
    return unread


def watch_inbox():
    """Re-count whenever any of the database files is modified."""
    while True:
        for database in get_all_databases():
            inotify.add_watch(database, mask=flags.MODIFY)
        while inotify.read():
            logger.info("New mail")
            logger.info(f"{check_unread()} new emails")


async def main():
    executor = concurrent.futures.ThreadPoolExecutor(max_workers=2)
    loop = asyncio.get_running_loop()
    check_unread()
    # Both loops block forever, so each gets its own worker thread.
    await asyncio.gather(
        loop.run_in_executor(executor, watch_inbox),
        loop.run_in_executor(executor, handle_menu_events),
    )


def entrypoint():
    signal.signal(signal.SIGINT, signal.SIG_DFL)
    signal.signal(signal.SIGTERM, signal.SIG_DFL)
    try:
        asyncio.run(main())
    except Exception as e:
        logger.exception(e)


if __name__ == "__main__":
    entrypoint()
If you want to do this, what you probably want is to pump your logs into a log drain; something like Betterstack is good. They then let you set up discrepancy thresholds and can send you emails when something seems out of the ordinary. There’s probably a self-hosted thing that works the same way, but I’ve never found a simple setup. You can do the whole Prometheus, InfluxDB, Grafana setup, but imo it’s too much work, and then you still have to set up SMTP for email separately from that.
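For the self-hosted flavor, a bare-bones version of the idea is just a script that tails a log file, counts error lines per window, and mails you past a threshold. Everything below (paths, addresses, the localhost relay, the "ERROR" marker) is a placeholder:

import smtplib
import time
from email.message import EmailMessage

LOG_FILE = "/var/log/app/app.log"  # assumption: your aggregated log file
THRESHOLD = 50                     # assumption: errors per window that count as abnormal
INTERVAL = 60                      # seconds per counting window


def send_alert(count: int) -> None:
    msg = EmailMessage()
    msg["Subject"] = f"Log alert: {count} errors in the last {INTERVAL}s"
    msg["From"] = "alerts@example.com"  # placeholder
    msg["To"] = "you@example.com"       # placeholder
    msg.set_content("Check the logs.")
    # Assumes an SMTP relay on localhost; point this at your provider instead.
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)


def main() -> None:
    with open(LOG_FILE) as f:
        f.seek(0, 2)  # start at the end of the file, like tail -f
        errors, window_start = 0, time.monotonic()
        while True:
            line = f.readline()
            if not line:
                time.sleep(1)  # nothing new yet; poll again shortly
            elif "ERROR" in line:
                errors += 1
            if time.monotonic() - window_start >= INTERVAL:
                if errors >= THRESHOLD:
                    send_alert(errors)
                errors, window_start = 0, time.monotonic()


if __name__ == "__main__":
    main()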
Literally had to write my own Python applet monitoring the DB file for this. Absurd limitation.
Came to write basically this. I would try caddy, but my compose file is 600 lines long now and half of that is traefik labels; I can’t be arsed with the migration.
I’m using the 555 open driver with Hyprland. No issues, and I can finally suspend and resume using the NVreg_PreserveVideoMemoryAllocations=1 module parameter, after being unable to all year.
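For anyone searching later: that’s a kernel module option, typically set via a modprobe drop-in (the filename below is arbitrary, just my convention):

# /etc/modprobe.d/nvidia-power.conf
options nvidia NVreg_PreserveVideoMemoryAllocations=1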
Imo, stick to AMD. I was like you: I thought the Nvidia card would be an upgrade, and I thought the rumors of how bad Nvidia is had to be at least a little exaggerated, but honestly it’s a constant PITA. Aside from the suspend issue, I’ve had random minor system upgrades cause kernel panics and fry my boot more than once this year. That bug is still unresolved, btw; their response time leaves much to be desired.
Having dockerized Ollama just work is nice, but it’s not worth it, and they seem to be close to a working Vulkan-based runner for that anyway.
It’s also a smaller serving, so healthier.
I’m sorry but this is hilarious. You spent less money and you got less food? Fascinating 😄
I do have nightly off-site backups, that’s true. Still, having the git repo be on the same machine doesn’t seem right to me.
That would fill the same role as Watchtower, I guess? I’ve previously looked at having Portainer manage the docker compose stack it’s running inside, but at least back then that seemed to be a dead end and not really what Portainer is meant to do. I’m not interested in moving away from docker compose at this time.
I’d be a bit concerned with having the git repo also hosted on the machine itself. If the drives break, it’s all gone. I could of course have two remotes, but then pushing changes still becomes a multi-step procedure.
I mean, that still allows Zendesk to reply with “oh yeah, that’s also why we’re not paying the bounty”.