Multiple NVMe Drives in raid0

Chudz

Respected Member
Donor
Was asked to post this on Pgblitz.

apt-get update && apt-get install mdadm --no-install-recommends

lsblk
(This will tell you the names of the NVMe drives so you can use them in the command below, usually nvme0n1, nvme0n2, etc.)

mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/nvme0n1 /dev/nvme0n2
mkfs.ext4 -F /dev/md0

lsblk

(This will tell you the name of the raid0 device. It should be md0, but for some reason it can change to md127, so use the command above to check.)

mount -o discard,defaults,nobarrier /dev/md0 /mnt
OR
mount -o discard,defaults,nobarrier /dev/md127 /mnt
(Depends on what it decided to name it. Seems it has a mind of its own.)
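
If the device name keeps jumping around on you, another option is to skip the /dev name entirely and mount by filesystem UUID. This is just a sketch on top of the commands above (the UUID below is a placeholder you get from blkid):

blkid
(Find the line for your md device and copy its UUID.)

mount -o discard,defaults,nobarrier UUID=<uuid-from-blkid> /mnt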

chmod 777 /mnt
 

thepj

Respected Member
Staff
@Admin9705 can you please have this added to the initial setup wiki, or add a link to another wiki where we can put more advanced configurations? This is something quite a few people in Pgblitz are asking for, myself included.

Thank you so much @Chudz, this is wonderful.
 

Chudz

Respected Member
Donor
There is probably a better way to mount it than using that command, for example a systemd service, but I'm not 100% sure how to set that up lol. Using the command above, you will have to run the mount command again to re-mount it after a reboot; once you've done that, you have to restart unionfs.service and also your docker containers.

Restart Unionfs:
systemctl restart unionfs.service

Restart all docker containers:
docker stop $(docker ps -a -q) && docker start $(docker ps -a -q)
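
For anyone who wants to try the systemd route mentioned above instead of re-mounting by hand after every reboot, a minimal mount unit might look something like this. Treat it as a sketch rather than a tested config; it assumes the array is /dev/md0 and formatted as ext4. systemd requires the file name to match the mount point, so save it as /etc/systemd/system/mnt.mount:

[Unit]
Description=RAID0 NVMe scratch array for PG

[Mount]
What=/dev/md0
Where=/mnt
Type=ext4
Options=discard,defaults,nobarrier

[Install]
WantedBy=multi-user.target

Then enable it so it mounts on every boot:

systemctl daemon-reload
systemctl enable --now mnt.mount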
 

Somedude

Respected Member
Staff
NOTE: This is not a normal PG setup and may introduce new variables not accounted for by the script. While several users have used this setup successfully, it is a use-at-your-own-risk setup, and it makes things harder to troubleshoot. If you do go forward and have any useful information or issues, feel free to ping @Somedude on Pgblitz or reply here.

Here is how I have mine set up. I will create a more in-depth setup guide to be added to the wiki as soon as I get a chance; I just finally created a GitHub account.

Benefits of raid 0 NVME:
GCP has significant bandwidth. If you look at section 3 of this article HERE, it mentions that each core can give you up to a 2Gbps connection, with a max of 16Gbps. I have not experienced speeds of 16Gbps, which may be due to usenet provider/Gdrive speed limitations based on file size, but I do consistently get speeds in the 2Gbps range.

With speeds this fast, you need a server powerful enough to utilize them. The first issue is the IO bottleneck from normal hard drive/SSD speeds. GCE solves this with an NVMe disk, but it can be pushed harder: by using raid 0 you can increase the available IO significantly. The speeds seem to plateau after 4x NVMe. Also, when increasing your drive speeds you need to increase CPU cores so that programs like nzbget have the processing power to unpack faster. Ram is not heavily used.

ROUGH recommended specs: for a 2x NVMe setup, 4 cores and 4GB ram, such as the n1-highcpu-4 plan. For 4x NVMe, either a custom 6 CPU/6GB ram machine or the n1-highcpu-8 (8 cores, 7.2GB ram) setup. While running the n1-standard-8 setup with 4x NVMe, I have not observed more than 75% CPU usage and have averaged 10% ram usage while downloading over 10k files with nzbget over the last 48 hours. That is roughly 6 cores and 3GB ram being utilized. Having the minimum CPU setup that won't cause a bottleneck will reduce your credit usage so you can run this longer. If you have observed different usage, please let me know so I can log it for better recommendations.

Here's the nitty-gritty of the setup:
Create an instance. Use the recommendations above to decide on the machine type. Under boot disk I recommend an SSD persistent disk with 30GB storage and the Ubuntu 18.04 LTS minimal image. The PG programs will be running off the boot disk, which is why I recommend the SSD, so that sonarr/radarr aren't bottlenecked. Only 30GB is needed because all the downloading and processing will be done on the NVMe drives; you just need enough space for the OS, programs, and the program-related files/caches. As for the OS, I had the best experience with Ubuntu, since I ran into major issues with Debian. I will do some further testing with Debian to see if I notice the same results. If you get it working with Debian successfully, please let me know.

Once you have the machine type and boot disk set up, you need to configure the firewall and NVMe drives. Under firewall, make sure you select "Allow HTTP traffic" and "Allow HTTPS traffic". Failing to do so may result in you not being able to access programs in the browser. Underneath firewall, click "Management, security, disks, networking, sole tenancy", then select "Disks", then "Add new disk". Under type, select "Local SSD scratch disk" and make sure to select "NVMe", not "SCSI", underneath that for the best performance. Then select how many NVMe disks you want. Don't forget to click "Done" at the bottom! Do note that (at least in the region I chose) NVMe disks cost $36/month PER disk, so a 4x NVMe setup is going to use $144/month of your credits. Once you have done all this, select "Create" and wait approximately five minutes for the instance to be set up.
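
If you would rather do this from the gcloud CLI instead of the web console, the whole instance can be created with one command along these lines. This is a sketch from memory, with a placeholder name and zone, so double-check the flags and image family against Google's current docs before relying on it; the http-server/https-server tags are what the two firewall checkboxes apply:

gcloud compute instances create pg-feeder \
  --zone=us-east1-b \
  --machine-type=n1-highcpu-8 \
  --image-family=ubuntu-minimal-1804-lts \
  --image-project=ubuntu-os-cloud \
  --boot-disk-size=30GB \
  --boot-disk-type=pd-ssd \
  --local-ssd=interface=NVME \
  --local-ssd=interface=NVME \
  --local-ssd=interface=NVME \
  --local-ssd=interface=NVME \
  --tags=http-server,https-server

Each --local-ssd line adds one 375GB NVMe scratch disk, so repeat it for however many drives you want in the array.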

Once you are SSH'ed into the server, here are the commands I used. Note: I am using the root account (sudo su) for setup to speed things up. I highly encourage you to follow the wiki to add a sudo user and add the sudo command as needed. If you do use the root account for setup, do note you will have to log back into root to make any changes, as they will not show up under your default user.

apt-get update && apt-get upgrade to make sure everything is up to date since it is a new server

apt-get install mdadm --no-install-recommends Installs mdadm, which is the software used to set up raid. The --no-install-recommends flag avoids installing unneeded dependencies.

lsblk Used to view attached drives. Please note the names of the NVMe drives. You should see sda with one or more partitions such as sda1/14/15. This is your boot drive, DO NOT TOUCH IT OR USE IT IN A COMMAND. The NVMe drives will show as having 375G each and will have a label such as nvme0n1/2/3/4 or sdb, sdc, sdd, etc., depending on how Google set up your server.

mdadm --create /dev/md0 --level=0 --raid-devices=4 /dev/nvme0n1 /dev/nvme0n2 /dev/nvme0n3 /dev/nvme0n4
or
mdadm --create /dev/md0 --level=0 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
This command creates the actual raid array. /dev/md0 is the name of the raid device. You can change this to /dev/md1, /dev/md69, /dev/mnt, etc. if you feel the need to rename it, just make sure /dev is in front of it. --level=0 tells it what raid level to use: 0 does raid 0, changing it to 5 does raid 5, and so on. --raid-devices=4 is how many drives you want in the raid array; change it to 2 for 2 NVMe drives, 3 for 3, etc. Lastly, the /dev/nvme0n1 section lists the names of the NVMe drives. The number of /dev/* drives you list here should match what --raid-devices is set to, or you will run into problems. If it created successfully you should get a response such as "mdadm: array /dev/md0 started."
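
For example, a 3x NVMe version of the same command (assuming the drives show up in lsblk as nvme0n1 through nvme0n3) would be:

mdadm --create /dev/md0 --level=0 --raid-devices=3 /dev/nvme0n1 /dev/nvme0n2 /dev/nvme0n3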

cat /proc/mdstat This is to verify the raid setup is active

mkfs.ext4 -F /dev/md0 This formats the raid array as ext4. Replace /dev/md0 with your custom name if you changed it.

mkdir -p /mnt This creates the /mnt directory that plexguide uses to download and process files

mount /dev/md0 /mnt This mounts the raid array to the /mnt folder. You can access /mnt like a normal folder, but instead of the files being stored on the boot drive they will be stored on your raid array. Change /dev/md0 if you have a custom name.

df -h -x devtmpfs -x tmpfs This shows you all your mounted drives. If you did it right you should see something such as:
"/dev/md0 1.5T 77M 1.4T 1% /mnt", which shows your raid array, available storage, and that it is mounted on /mnt

You now have a working raid array to use with PG! Now for a sanity check: let's save the raid array so that it mounts itself even when the server is rebooted, and verify it worked. Without doing this, if your server ever reboots it will unmount the drive, requiring it to be remounted on every restart. Failing to remount it can cause a lot of issues that lead you down a rabbit hole.

mdadm --detail --scan | tee -a /etc/mdadm/mdadm.conf Used to save the raid array in mdadm
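
One extra step that is not in the write-up above but is commonly recommended on Ubuntu/Debian: the initramfs keeps its own copy of mdadm.conf, so refresh it after the command above, otherwise the array can come back up as md127 instead of md0 after a reboot:

update-initramfs -u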


nano /etc/fstab Used to open fstab in a text editor. You can use any text editor you like, I just prefer nano. You may have to run apt-get install nano if it is missing. fstab is your filesystem table, and you want to save your raid array in it. You want to add the following to the bottom of the file:
/dev/md0 /mnt ext4 defaults 0 0

Again, replace /dev/md0 if you used any custom naming. If you are using nano, press CTRL + X to exit; when it asks if you want to save, type "Y", then press Enter to confirm the file name.
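
If you are worried about the server hanging at boot waiting for this mount (for example if the array ever fails to assemble), a more defensive version of that fstab line is an option. This is a suggestion on top of the guide rather than part of the tested setup: reference the filesystem by UUID and add nofail so boot carries on even if the array is missing. Grab the UUID first, then use it in place of the placeholder:

blkid /dev/md0
UUID=<uuid-from-blkid> /mnt ext4 defaults,nofail,x-systemd.device-timeout=10s 0 0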

Now for some checkers! I mean the kind that make sure everything worked properly.

mount -av shows an overview of fstab to verify you added the array correctly. You should see /mnt already mounted.

touch /mnt/test.txt This creates a blank .txt file in the /mnt directory to verify you can write to it

echo "This is a test" > /mnt/test.txt This simply adds text to the text file you created above. Change the text how you like.

cat /mnt/test.txt This reads what is in the file. You should get a response saying "This is a test"

ls -l /mnt This shows you what is in the /mnt directory. You should see the test.txt

Now go ahead and reboot your server. Once it's back online you should be able to run mount -av, ls -l /mnt, and cat /mnt/test.txt again with the same results as above. If that all worked, your raid array will survive reboots.

Once everything is ready to go, you can install plexguide normally. When it asks you if you want to change the processing disk, select "no". Once PG is set up, go ahead and start downloading, run netdata, and enjoy watching the madness!

Again, if you have any suggestions to improve this guide or run into issues, let me know. I want everything nailed down before adding this to the official wiki.
 

thepj

Respected Member
Staff
This is a great write-up. I didn't read it all, it's a lot :p

However, I have seen the benefit of having 3x NVMe: faster extraction and repair, and a larger buffer for items that have not been uploaded to Google Drive yet. If you are going to do anything faster than 100MB/s, 3x NVMe is great. I haven't gone beyond the need for 2x NVMe myself, and even with a ton of items in post-processing I haven't maxed out the available space. For anything between 1MB/s and 100MB/s I can really see the benefit of 2x NVMe in raid 0; it really has made a difference, especially from the space perspective while we wait for Sonarr/Radarr to process/move the files.
 

Somedude

Respected Member
Staff
Later on it mentions recommended specs for 2x and 4x NVMe drives. It can be changed to 3; that's why I broke down the mdadm commands, so people can adjust it to how they want to do it. Personally I'm rocking the 4-drive setup. I am getting sustained 200 MB/s speeds on most files over one GB, and am uploading as fast as I'm downloading, so the space really isn't even for a queue anymore, just IO.
 

Cryptids

Respected Member
Staff
Donor
I've been attempting this, but I always get an error on reboot:

Connection Failed
We are unable to connect to the VM on port 22. Learn more about possible causes of this issue.

Is this down to my FSTAB stuff?
 

Cryptids

Respected Member
Staff
Donor
I would say it is, as I did it again without the step:
nano /etc/fstab Used to open fstab in a text editor. You can use any text editor you like, I just prefer nano. You may have to run apt-get install nano if it is missing. fstab is your filesystem table, and you want to save your raid array in it. You want to add the following to the bottom of the file:
/dev/md0 /mnt ext4 defaults 0 0

However, obviously, I have had issues on reboot.
 

mixedvadude

Junior Member
Later on it mentions recommended specs for 2x and 4x NVMe drives. It can be changed to 3; that's why I broke down the mdadm commands, so people can adjust it to how they want to do it. Personally I'm rocking the 4-drive setup. I am getting sustained 200 MB/s speeds on most files over one GB, and am uploading as fast as I'm downloading, so the space really isn't even for a queue anymore, just IO.
Are you just using this setup primarily as a downloading/uploading media machine, or are you also hosting Plex or Emby on the server as well?
 

Somedude

Respected Member
Staff
I've been attempting this, but I always get an error on reboot:

Connection Failed
We are unable to connect to the VM on port 22. Learn more about possible causes of this issue.

Is this down to my FSTAB stuff?

Something I learned recently, which I forgot to add here and in the wiki, is that if you run the "reboot" command from the web SSH, it treats it as shutting down the VM and terminating it instead of doing the typical restart. To do a normal restart without issues, you need to click on the instance and then use the "reset" button at the top for it to do a normal software reboot. Same issue with the shutdown command in SSH: you need to use the "stop" and "start" buttons in the Google Cloud Compute console to do a full shutdown and startup without the instance being terminated. It's weird and confusing, but that is how they do it.
You can see some documentation about it here for restart and here for shutdown and boot up. Thank you for pointing it out so that others do not have to deal with it in the future.


Are you just using this setup primarily as a downloading/uploading media machine, or are you also hosting Plex or Emby on the server as well?
This is meant as a downloading/uploading machine. Google compute does charge for certain traffic, which makes this ideal as a feeder machine but not as a Plex server (although people have done it). Google does not charge for downloads or for uploading to Google platforms. This means downloading from Usenet and uploading to Gdrive does not use any paid bandwidth. Google does charge for uploading to anywhere else, starting at a rate of $0.12 per GB. This means that if you are streaming a lot you will run out of credits pretty fast. But if you are a very light streamer, are not sharing with anyone else, and assume the risk of your plex server being shut down the second you run out of credits, it could be an extremely powerful plex server if you so wanted. Just don't complain to anyone here or on Pgblitz if you run into problems with credits while using plex on GCE.
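
To put rough numbers on that (my math, using the starting rate above): streaming a 10GB file uses about 10 x $0.12 = $1.20 of credit, and a 1TB month of streaming works out to roughly $120, so heavy Plex use can chew through a $300 trial credit in a few weeks on top of what the instance itself costs.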

Side note: I don't follow traditional logic and am myself using plex on its own GCP account. I want to see how long it can last with this methodology with heavier usage. I am backing up daily, monitoring my credits, and do not plan on this staying a stable sever long term.
 

mixedvadude

Junior Member

Yeah I already have been using GCE as my host for my Plex server for the better part of a year now (about the last 10 months). I was just curious in this instance if you were using this 4x nvme server to ALSO host Plex on it.
 

Somedude

Respected Member
Staff
Haha, that would be both overkill and burn credits so fast I would have to rebuild it every 3 weeks (ignore the fact that I rebuild PG several times a week for testing). I don't think there would be much benefit to using NVMe for plex in the first place, but an SSD might help?

Also, would you mind creating another post describing your experience using GCE as a long-term plex solution? A lot of people discredit it, and I want to consider all the pros and cons. Some of the details I would like are: which instance and drive setup, how many users, how much general usage, how long credits last, whether bandwidth is the main factor in burning through credits fast, how long it takes to create a new account and redeploy, whether you have used one card every time or multiple cards, and whether it is really worth the hassle of rebuilding the server over time. You don't need to give all those details, but the more in-depth you are the more useful it will be. I am exploring a lot of the "non-recommended", "unorthodox", "how can that work" type setups to see how they can be used to push PG even further.
 

mixedvadude

Junior Member
Well, my response to you was going to be that I already rebuild the server on a new GCE account about once every 1.5 to 2 months. I've toyed with different setups; the last few times I've done setups that were estimated at $150+/mo.

And I use my Plex server myself everyday, as well as share it with my mother and also my father. They don't use it nearly as much as I do, but I've never noticed the credits to run out any faster than the actual estimated cost of my server specs would normally run (ex: $150/mo spec'd server lasting 2 months).

Plexguide's backup and restore feature is why I've gone this route for the past 10 months, essentially running a nicely powered remote Plex server and media-grabbing machine for free with GCE trials. I used to have a nicely spec'd seedbox provided by Andy10gbit (for those who know him on reddit) that cost me $170/mo. The only drawback is I haven't used torrents as much since going this route (I'm a part of a couple private trackers), since I don't torrent on GCE.
 

mixedvadude

Junior Member
Something I learned recently, which I forgot to add here and in the wiki, is that if you run the "reboot" command from the web SSH, it treats it as shutting down the VM and terminating it instead of doing the typical restart. To do a normal restart without issues, you need to click on the instance and then use the "reset" button at the top for it to do a normal software reboot. Same issue with the shutdown command in SSH: you need to use the "stop" and "start" buttons in the Google Cloud Compute console to do a full shutdown and startup without the instance being terminated. It's weird and confusing, but that is how they do it.
You can see some documentation about it here for restart and here for shutdown and boot up. Thank you for pointing it out so that others do not have to deal with it in the future.
This actually doesn't change anything as far as the outcome. I also ran into the "Connection Failed
We are unable to connect to the VM on port 22. Learn more about possible causes of this issue." error last night.

So just now I set up a fresh new instance, reran through all the steps, and did a "reset" on the GCE dashboard, and the same error came up when I tried to SSH back into it. "Reset" acts just like typing "reboot" on the command line (in fact, the documentation link you provided says much the same thing: that typing "sudo reboot" on the command line is the exact same thing).
 

Somedude

Respected Member
Staff
I will look into it when I have time. Unfortunately I have been busy. I'll post here if I figure anything out.
 

KanKan

Junior Member
I've been attempting this, but I always get an error on reboot:

Connection Failed
We are unable to connect to the VM on port 22. Learn more about possible causes of this issue.

Is this down to my FSTAB stuff?
So from what I can tell, it is the Plexguide install that is breaking the server. I can follow the above process and reboot the system 20 times with no issue; as soon as I install plexguide, it never comes back up. If you connect using the serial console, it shows the boot stuck waiting for the mount to finish.
 
