Scheduled Maintenance - June 29th, 2019 Resolved
Priority - Medium Affecting Server - Lychee (Utah) NVMe SSD

The datacenter will be migrating from its old network to a new network in the New York City metro area.

Expected downtime: 2 hours.

Server data will remain intact and untouched; the server will simply be connected to a new switch and will remain powered on through the entire migration.

Scheduled Maintenance - June 29th, 2019 Resolved
Priority - Medium Affecting Server - Cherry (NJ) SSD

The datacenter will be migrating from its old network to a new network in the New York City metro area.

Expected downtime: 2 hours.

Server data will remain intact and untouched; the server will simply be connected to a new switch and will remain powered on through the entire migration.

Titas Server Scheduled Maintenance Resolved
Priority - Medium Affecting Server - Jasmine (DAL) SSD

We are going to reboot the server to update its software. We'll post updates here.

We are going to reboot the server within the next 5 minutes.


We rebooted the server and it is running fine now.

Failed Disk Replacement and Server Reinstall Resolved
Priority - Critical Affecting Server - Cherry (NJ) SSD

Dear Customers,

We found that both disks in the Cherry server have failed, so we need to replace them. We have account backups and will restore accounts from backup after replacing the disks.

Update: Datacenter technicians have powered down the server.

Update: The disks have been replaced and the OS reinstalled. We are now installing software; once the installation is complete, we'll start the restoration.

Update: We have started the data restore. It will take 5-7 hours to complete.

Update: The restoration is still running. 40% completed.

Update: The restoration is still running. 60% completed.

Update: The restoration is still running. 90% completed.

Update: The restoration is complete. If you find any data loss, please open a support ticket.

MySQL server down Resolved
Priority - Critical Affecting Server - Jupiter (LA) SSD

We found that the MySQL server is down and are working to resolve the issue. We'll update the status with further details.

We have fixed the MySQL server issue. It's up now.
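
For context, outages like this are usually first caught by a simple external port probe. Below is a minimal sketch in Python of that kind of check; the hostname is a placeholder, not the server's real address.

    import socket

    def mysql_is_up(host, port=3306, timeout=5.0):
        """Return True if a TCP connection to the MySQL port succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    # Hypothetical hostname, for illustration only.
    print(mysql_is_up("jupiter.example.com"))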

Titan Server Down Resolved
Priority - Critical Affecting Server - Titan (DAL) SSD

We see that the Titan server is down. We have contacted the datacenter and are waiting for their response; we'll update this notice once we hear back.

Update: The datacenter is facing a power failure in their rack. They are working to fix the issue.

Update: The datacenter is replacing a failed PDU in their rack.

Update: The server is back online now.


Schedule Server Reboot Resolved
Priority - Medium Affecting Server - Padma (LA) SSD

This notification is to inform you of scheduled maintenance that may affect your ExonHost service. Please review the details of this notification to determine how it will affect your service. (All times are UTC+6.)

Event Type: Server Kernel Upgrade and Reboot
Start Time: 2017-11-20 12:00
End Time: 2017-11-20 12:20
Location(s): Los Angeles
Server: Padma
Client Affecting: Yes

Event Summary: Important maintenance will be performed on one of our servers and may affect your services. The server will be rebooted to upgrade its kernel, so you may see 10-20 minutes of downtime.
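
As background, a reboot is needed after a kernel upgrade because the running kernel can only be swapped at boot. A rough sketch in Python of how one might confirm a reboot is pending, assuming an RPM-based system (the version comparison below is a naive string sort, for illustration only):

    import subprocess

    def run(cmd):
        return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

    # Kernels installed on disk vs. the kernel actually running.
    installed = run(["rpm", "-q", "--qf", "%{VERSION}-%{RELEASE}.%{ARCH}\n", "kernel"]).split()
    running = run(["uname", "-r"]).strip()

    # Naive comparison; a production check would compare RPM versions properly.
    print("reboot needed:", running != sorted(installed)[-1])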

Cherry server issue Resolved
Priority - Critical Affecting Server - Cherry (NJ) SSD

The Cherry server is facing a high load issue. We are working on the server to bring everything back to normal.

We'll update the status soon.


Everything is back to normal now. It was a DDoS attack; we removed the targeted account.

Srv3 Down Resolved
Priority - Critical Affecting Server - Julius (DAL) SSD

We are aware that Server3 is down. We have contacted the datacenter and are waiting for their update. We'll update the status page when we get more information.

Update: The server is back online after a reboot.

Scheduled Reboot Resolved
Priority - Low Affecting Server - Padma (LA) SSD

We will be rebooting the server to complete the installation of CloudLinux.


Update: Server back online.

Lychee Server Network Maintenance Resolved
Priority - Medium Affecting Server - Lychee (Utah) NVMe SSD

Network is currently down for maintenance.

The maintenance window is: 04-14-2016 8:00 AM (EST) - 04-14-2016 12:00 PM (EST)

We are performing emergency network maintenance during the above window. This could result in a brief network outage and intermittent connectivity issues during the maintenance window.

The scope of work entails upgrading network distribution equipment.

Server3 Down Resolved
Priority - Critical Affecting Server - Julius (DAL) SSD

The switch in the cabinet where our servers are located has experienced an unexpected power failure. Our datacenter engineers are currently working on replacing the failed switch with a new one. We should have you back online within the next 30-40 minutes, so please bear with us a bit longer.

Orange Server Down Resolved
Priority - Critical Affecting Server - Venus (DAL) SSD

We are aware that the Orange server went down. The server suddenly lost power, and we have started it back up.

An fsck is currently running. The server should be back online once the fsck completes; we'll update you when it is.

Update: The fsck has finished. The server is back online.
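
For context, the check referred to here is the standard filesystem repair pass run after an unclean shutdown. A minimal sketch of how it is typically invoked (the device name is a placeholder; fsck must run against an unmounted filesystem, e.g. from a rescue console):

    import subprocess

    # -y answers yes to all repair prompts. The device path is hypothetical.
    result = subprocess.run(["fsck", "-y", "/dev/sda1"])

    # Exit status: 0 = clean, 1 = errors corrected,
    # 2 = reboot required, 4 or higher = errors remain.
    print("fsck exit code:", result.returncode)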

Server6 Down Resolved
Priority - Critical Affecting Server - Lion (Utah) NVMe SSD

We are aware that Server6 went down. We rebooted the server, and it is now running a quota check against the filesystem. It will return online once this completes. No status output is available to show how far along the process is.
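
For reference, a boot-time quota scan of this kind is typically an invocation like the sketch below. It walks every inode on each quota-enabled filesystem, which is why it reports no percentage progress:

    import subprocess

    # Check all quota-enabled filesystems (-a) for user (-u) and group (-g)
    # quotas, verbosely (-v), without remounting read-only (-m).
    subprocess.run(["quotacheck", "-avugm"], check=True)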

Server4 Down Resolved
Priority - Critical

We are aware of the issue. The server is facing high load.

We have rebooted the server, but the OS is showing an error. Our technicians are working to resolve the issue as soon as possible.

Xen01 node reboot Resolved
Priority - Critical Affecting Server - node01

We are going to reboot the Xen01 node to patch a Xen hypervisor vulnerability. We are expecting 30 minutes of downtime.

Server6 Rebooted Resolved
Priority - Critical Affecting Server - Lion (Utah) NVMe SSD

We rebooted the server. It should be back within 30 minutes.

DDoS Attack Resolved
Priority - Critical Affecting Server - Julius (DAL) SSD

We are facing a DDoS attack on srv3. The datacenter has nullrouted the server's IP.


Update: The attack is still ongoing at around 5.6 Gbps and 2.3 million PPS, so the datacenter has extended the nullroute for the next 4 hours.

Update: The nullroute has been lifted. All sites are up now.
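
For context, a nullroute is a blackhole route installed upstream so that traffic to the attacked IP is discarded before it reaches the rack, sacrificing one IP to keep the rest of the network up. A minimal sketch of the equivalent on a Linux router, using a placeholder IP from the documentation range:

    import subprocess

    victim_ip = "203.0.113.10"  # placeholder, not the real server IP

    # Discard all traffic destined for the attacked address at this hop.
    subprocess.run(["ip", "route", "add", "blackhole", victim_ip], check=True)

    # Lifting the nullroute later:
    # subprocess.run(["ip", "route", "del", "blackhole", victim_ip], check=True)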

Server down Resolved
Priority - Critical Affecting Server - Padma (LA) SSD

We are facing a kernel panic on this server. We are working on the issue.

Server7 Down Resolved
Priority - Critical

We are aware of the issue and are working to fix it.

Update: Server is back online now.

Srv5 down Resolved
Priority - Critical

We are aware of the Server05 issue. We are working to resolve it as soon as possible.

Update: Server is now online.

Server6 Issue Resolved
Priority - Critical

We are facing a disk issue on Server6. We are working to fix it as soon as possible.

Server6 [Down] Resolved
Priority - Critical

We are aware of the Server6 issue. We are working on this and trying our best to resolve it as soon as possible.

Update: The issue has been resolved and all sites are loading fine now.

DDoS Attack Resolved
Priority - Critical Affecting Server - Venus (DAL) SSD

The SRV1 server is currently undergoing a DDoS attack, and its main shared IP has been nullrouted by the datacenter. We're working to resolve this now.

Update: We've changed the server's main IP. The new IP can take 4-6 hours to propagate.
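
The 4-6 hour figure reflects DNS caching: resolvers keep the old A record until its TTL expires. A minimal sketch of how one might check whether a given resolver has picked up the change; both the domain and the IP are placeholders:

    import socket

    domain = "example.com"       # placeholder for a hosted domain
    new_ip = "198.51.100.25"     # placeholder for the new server IP

    resolved = socket.gethostbyname(domain)  # asks the local resolver
    print(domain, "->", resolved,
          "(propagated)" if resolved == new_ip else "(still cached)")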

Filesystem Check Resolved
Priority - Critical

Server2 has suffered a kernel panic and has been rebooted.

The server is currently running a filesystem check, which is expected to take around 30 minutes to complete.

Update: FSCK has completed and the server is back online.

DDoS Attack Resolved
Priority - Critical


This server is currently under a DDoS attack and the server IP has been nullrouted. The nullroute will be removed in 4 hours.

Update: The datacenter has lifted the IP nullroute. All sites are loading fine now.

Server02 (FL) Down Resolved
Priority - Critical

Server02 is currently down. We've contacted the datacenter and are waiting for a response.

We'll post details once the server is back online.


Update: The server is now online. We'll post an update once our investigation is complete.

Server01 Reboot Resolved
Priority - Medium Affecting Server - Venus (DAL) SSD

Server01 has been rebooted. It can take 10-30 minutes to come back online.

DDoS Attack Resolved
Priority - Critical

Our monitors have alerted us to a DDoS attack against Server 05. We are working on this as quickly as we can and sites should be back up and running shortly.

Update: The server is running fine now.

DDoS Attack Resolved
Priority - Critical


srv5 is now under a DDoS attack and its main IP has been nullrouted for a period of 30 minutes. After this, the nullroute will be removed and sites on the main IP will be back online.

Server3 Nginx issue Resolved
Priority - Critical Affecting Server - Julius (DAL) SSD

We see that Nginx is down, so sites are not loading at the moment. We are working to fix the Nginx issue.

The issue has been fixed. Please open a support ticket if you are facing any issues.

Srv2 Offline Resolved
Priority - Critical

srv2 is currently offline as it undergoes a filesystem check (fsck) to ensure data integrity. We are watching the fsck process via remote console and will bring the server back online as soon as we've confirmed all data is intact.

Update: FSCK has completed. Server is online and operational.

Datacenter Network down Resolved
Priority - Critical

Server02 and Server06 are down due to a network issue. The datacenter is working to resolve the issue. Here is the reply from the datacenter:

"The issues being experienced at this time are network related and are being actively troubleshooted by our datacenter technicians."
"At this time, we have no exact ETA but we have our best technicians actively working to resolve the issue. Thank you for your patience."

The servers are back online now.

Server2 down Resolved
Priority - Critical

We are seeing unauthorized access on our server and are investigating the issue now. Typically in this case we would need to reinstall the OS to make sure no malicious backdoors or shells have been left behind. We will update this ticket as more information is gathered in our investigation.

Update: We checked for any obvious signs of malicious files or injections. As nothing was found, we have brought the server back online. We highly recommend that you change your cPanel, FTP, and database passwords.
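
A sweep like the one described often starts by listing files modified since the suspected intrusion. A simplified sketch, with a placeholder path and time window:

    import os, time

    root = "/home"                     # placeholder; scoped to account docroots in practice
    cutoff = time.time() - 2 * 86400   # files changed in the last 2 days

    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                if os.path.getmtime(path) > cutoff:
                    print(path)
            except OSError:
                pass  # file vanished or unreadable; skip it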

Server 2 Emergency Reboot Resolved
Priority - Critical

We've found that some services aren't running correctly, so we rebooted the server; it can take up to 20 minutes to come back.

Update: It seems we are now facing a network issue. We have contacted the datacenter and are waiting for their response.

Update: A datacenter technician just replied that there was a network adapter error and they have solved the issue. The server is loading fine now.


Server 2 Down Resolved
Priority - Critical

We have rebooted the server and found a physical disk issue. The server is going through a filesystem check at this time; we'll keep you updated.

Update: The fsck is still running; we'll keep you updated.

Update: 75% complete.

Update: Server is back online.

Server 6 Offline Resolved
Priority - Critical Affecting Server - BD01(Shared)

It appears our FL datacenter, Hostdime, is facing a network issue.
Their engineers are aware of the issue and are working on restoring service.
We will post an update here when the server is back online.

Update: The server is back online. The outage was caused by an inbound attack on a small portion of their network.

US8 Performance Issues Resolved
Priority - Medium Affecting Server - Venus (DAL) SSD

We are aware of performance issues with US8 (LA) and are working to resolve this as quickly as we can. Please bear with us.

Update: As per yesterday's email, the server is now going down for a filesystem check.

Update: The check is now being run. We will update with the progress shortly.

Update: The check is currently at 45% complete.

Update: 76% complete.

Update: The main fsck has finished and the server is performing a quick second check before booting.

Update: The second check is at 60% complete.

Update: Now at 88%.

Update: The server is now booting.

Update: Local quotas are being checked, so the server start is taking longer than normal.

Update: The server is now up and running and will take a few minutes to settle.

US (Chicago) Server Offline Resolved
Priority - Critical Affecting Server - Padma (LA) SSD

We have just discovered a potential hacking effort targeting our server. Upon further investigation, it does in fact appear that the server has been rooted. As a safeguard, we've taken public-facing networking devices offline in order to prepare for our next steps. We will post an update shortly.

Update: We are starting to restore the server from our offsite backup.

Update: 50% completed.

Update: 80% completed.

The account restoration has completed. If you are facing any issues, please submit a support ticket.

US Server Los Angeles Resolved
Priority - Critical Affecting Server - Venus (DAL) SSD

This server is currently experiencing a small DDoS attack. We're working on mitigating this now.

US Server Los Angeles Resolved
Priority - Critical Affecting Server - Venus (DAL) SSD

US8 is currently offline. A reboot has been issued and the server should be back online within the next 20 minutes.

The cause of the downtime will be investigated once the server is up and running.

Update: The server is currently running a filesystem check following the reboot.

Update: The check is at 65% complete.

Update: A second check is now running to ensure the filesystem is clean. As the first repaired any issues, this should complete much faster.

Update: The second fsck is at 50%.

Update: The server is now booting.

Update: The server is now up and running.

US (Master) Outage Resolved
Priority - Critical Affecting Server - Orange (DAL) SSD

Engineers were alerted to a system failure on the US (master) node. Upon investigation, they found the system had suffered a kernel panic. Engineers rebooted the node and it is currently running fsck. The server will be online once the fsck is completed. The time required for the fsck process varies, but we expect around 30 minutes for it to complete. We will post further updates here shortly.


The server is online now and we are monitoring it. Please note that the server will restart after some time, once it has recalculated disk usage quotas.

Partial Network Downtime Resolved
Priority - Critical Affecting Server - Venus (DAL) SSD

We're currently experiencing downtime on server US8 due to a network issue within WebNX, our Los Angeles provider.

Onsite techs are working on the issue and we'll provide more updates shortly.

The issue has now been resolved; it was traced back to a faulty router.

A full RFO will be posted shortly.

US8 Server Packet Loss Resolved
Priority - Critical

We're currently experiencing packet loss at our Los Angeles location.

Our datacenter provider is aware of the issue and is working to resolve it as quickly as they can.

US (Master) Outage Resolved
Priority - Critical Affecting Server - Orange (DAL) SSD

Our Network engineers are currently working on the node as it isn't responding.

Update: Our engineers have indicated the server was unresponsive at the console. Upon reboot, the system forced itself into a maintenance shell and is requiring a filesystem check. We will update this ticket as more information becomes available.

US Server Reboot Resolved
Priority - Critical Affecting Server - Venus (DAL) SSD

US6 is performing a quick reboot after making some server changes to improve performance.

Update: Server is performing an fsck and is at 40% complete. It should be back up in 30 minutes.

Update: 70% complete.

Update: The fsck is complete and the server is booting.

Update: Server is now online.

US server Downtime Resolved
Priority - Critical Affecting Server - Venus (DAL) SSD

We're currently aware of a problem with US6 and are working to bring it back online as quickly as we can.

Update: An fsck is currently being run on the filesystem.

Update: The fsck is currently 60% complete.

Update: The fsck is still running. Since it's taking this long, we assume there is some filesystem damage. We're going to give it another 4 hours to complete, and if it doesn't come back we will swap the RAID card and drives and start restoring from our backups.

Update: We're now swapping the hard drives and RAID card and starting the backup restore.

Update: The restore is now in progress. As anyone who has used R1Soft knows, it doesn't show an ETA for the restore. We're monitoring the bandwidth graphs and will provide a rough ETA once we have transferred enough data to work out an average.

Update: So far 11GB has been transferred out of around 200GB. We expect the restore to take a few more hours.
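
The ETA here comes from simple rate arithmetic over the bandwidth graphs. A worked example with illustrative numbers (the elapsed time is hypothetical, not from the incident):

    transferred_gb = 11     # restored so far (from the update above)
    total_gb = 200          # estimated total
    elapsed_min = 20        # hypothetical elapsed time

    rate = transferred_gb / elapsed_min                   # GB per minute
    remaining_min = (total_gb - transferred_gb) / rate
    print(f"~{remaining_min / 60:.1f} hours remaining")   # ~5.7 hours at this rate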

Update: 60GB restored.

Update: 75GB restored.

Update: It seems there was less data to restore than we thought. The restore completed at 100GB, but the new RAID card appears to be incompatible with the restored system. We're replacing it again with a card of the same model as previously used.

To clarify, once the RAID card has been swapped we should be back up and running. There is no more data to restore.

Update: The server is having trouble booting due to a difference in partition sizes between the old and new arrays. Onsite techs are trying to work around this, but it's possible we may have to run the restore again.

Update: Unfortunately we're having to run the restore again. We'll post updates every 25GB restored.

Update: We're at 26GB restored now.

Update: The restore faltered earlier and had to be restarted. Since the restart, we are now at 50.5GB and going strong.

Update: 75GB restored.

Update: 100 GB restored.

Update: 125GB restored.

Update: The restore is complete and we're making some kernel adjustments to allow the server to reboot.

Update: The server is in the middle of performing an fsck - this is normal after a bare-metal restore and should last around 30 minutes.

Update: 80% complete.

Update: A second fsck is running and is at 65%.

Update: The server is now up and running and we're checking the status of services now.

Update: LiteSpeed is refusing to start due to the hardware swap; their licensing system isn't allowing the key to be used. We're working around this now.

Update: LiteSpeed has been started.

Update: We're swapping some cables around to restore network connectivity for dedicated IP customers. Server will be offline for 5 minutes.


USA Server Resolved
Priority - Critical Affecting Server - Venus (DAL) SSD

There seems to be a network issue causing packet loss on the USA server. We have found that it is a SYN flood and it will be resolved as soon as possible.

Update: We're blocking the attacking IPs, but this will take a little while to fully process.
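
For context, one common stopgap against a SYN flood is to rate-limit inbound SYN packets at the firewall while the attacking IPs are collected and blocked. A minimal sketch using iptables; the thresholds are illustrative and would be tuned to the server's normal traffic:

    import subprocess

    def run(cmd):
        subprocess.run(cmd, check=True)

    # Accept SYNs up to a rate limit, then drop the excess.
    run(["iptables", "-A", "INPUT", "-p", "tcp", "--syn",
         "-m", "limit", "--limit", "25/second", "--limit-burst", "100",
         "-j", "ACCEPT"])
    run(["iptables", "-A", "INPUT", "-p", "tcp", "--syn", "-j", "DROP"])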

UK Premium Server Drive Swap Resolved
Priority - Medium

cPanel9 is currently offline for a drive swap to resolve recent performance issues - this should take around 20 minutes and the server will be rebooted.

Update - The server is now booting back up again and the RAID is rebuilding.

Server Status

Below is a real-time overview of our servers, where you can check for any known issues. Each server is monitored for HTTP, FTP, and POP3 service status, server load, and uptime.
Apollo (UT) NVMe
Banana (DE) NVMe
Cherry (NJ) SSD
Corporate (LA)
Jasmine (DAL) SSD
Jupiter (LA) SSD
Lemon (AZ) SSD
Lion (Utah) NVMe SSD
Lychee (Utah) NVMe SSD
Neptune (Utah) NVMe
Oliver (Tampa) NVMe
Orange (DAL) SSD
Padma (LA) SSD
SG1 (SG) SSD
Tiger (DAL) SSD
Titan (DAL) SSD
Venus (DAL) SSD
