> Hopefully adding a solid torrent VPN client

Both the rflood-vpn and transmissionvpn apps are stable (or, at the very least, they work great for me). But this does give me an idea for a standalone VPN container that you can just point your client of choice to. I prefer module over packaged.
> Can we implement automation of multiple domains? Also with different top-level domains for each? Thanks

No, this exceeds the scope of the project and is too complicated. If you wish to do this manually, you can fork the project and code the TOML file yourself. Just being honest (and this is the reason I enabled modular forking).
> Add a setting to limit how much CPU some containers get?
>
> I've noticed some containers will hog every resource to the point that the machine they are on grinds to a halt, where I would have to restart the box or just wait it out.
>
> Jellyfin doing library scans will grab all four of my cores and load them to 100% (might just be a bug at the moment due to its rapid development and changes, but I really like how the team over there is building out the project), so it would be nice to lock it to 3 cores on deploy.
>
> I believe the Sonarr container has behaved similarly: it will grab a bunch of resources all at once, and if any other container is fighting for them it becomes really sluggish and watching videos becomes a buffer fest.

Ironically, we had this to some extent, but this is where you may fork and add the CPU-resource line yourself. There is no easy way around it, because it means hard-coding a resource number into the YAML, and the 1024 CPU-share split is hard to calculate with tons of other containers running.
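For anyone who does fork and add the line themselves, Docker can already pin a container to specific cores. A minimal sketch of what that might look like in a compose-style YAML, assuming a stock `jellyfin/jellyfin` image on the four-core box described above (the service name, image, and core count are illustrative, not PlexGuide's actual config):

```yaml
version: "2.4"
services:
  jellyfin:
    image: jellyfin/jellyfin
    # Pin the container to cores 0-2, leaving core 3 free for everything else
    cpuset: "0-2"
    # Alternatively, cap total CPU time at the equivalent of 3 cores
    cpus: 3.0
```

The same effect is available on a running container without a redeploy via `docker update --cpus 3 jellyfin`, which is handy for testing a limit before committing it to the YAML.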
> Would be nice to see Docker Swarm inside. It has some benefits over regular Docker, like self-healing and fewer port problems (you don't need to expose any ports with Traefik). If you want, I have a private Bitbucket repo with examples; it's mainlined for my NAS but should be easy to convert into a host setup.

This would require a rewrite. You can send what you have via PM, but as far as I'm aware, Ansible does not do Swarm. I could be wrong.
Check out our modular-forking YouTube video. You know how it asks if you want to use your fork? You would, and then you would use GitHub to send it back to us. That should help you along with Varken.
> It would be nice to have the option to store each day of backups separately, so that you have a history of backups rather than the current system, which overwrites the last backup.
>
> Maybe something like /mnt/unionfs/plexguide/backups/20190424/ as the folder for today's backups, for example. Tomorrow's would be /mnt/unionfs/plexguide/backups/20190425/.
>
> You could also set it to delete any backups older than x days if you were worried about storage space (e.g. if you didn't have unlimited Gdrive).

Good thought; I'll keep it in mind. The reason this doesn't work is that the restore process becomes a nightmare, because not everyone chooses a daily backup. If you ever want to recover an older backup, go to the trashcan, restore it, and replace what's there.
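For anyone who wants to script the dated-folder scheme above themselves, a minimal sketch in shell, assuming GNU `find` and `date`; `backup_rotate`, its arguments, and the example paths are hypothetical, not part of PlexGuide:

```shell
#!/bin/sh
# backup_rotate DEST_ROOT SRC_DIR KEEP_DAYS
# Snapshots SRC_DIR into DEST_ROOT/YYYYMMDD/ and prunes dated
# folders older than KEEP_DAYS days.
backup_rotate() {
    root=$1; src=$2; keep=$3
    today="$root/$(date +%Y%m%d)"      # e.g. .../backups/20190424
    mkdir -p "$today"
    cp -a "$src/." "$today/"           # copy today's data into the dated folder
    # delete top-level dated folders whose mtime is older than $keep days
    find "$root" -mindepth 1 -maxdepth 1 -type d -mtime +"$keep" \
        -exec rm -rf {} +
}

# Example usage (paths are assumptions, not PlexGuide's real layout):
# backup_rotate /mnt/unionfs/plexguide/backups /opt/appdata 14
```

The retention step is just `find -mtime +N`, so running the script daily from cron gives both the per-day history and the automatic cleanup the request asks for.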