When you first build a website the database may seem fast, and all your queries execute quickly. But as the database grows, queries can start taking noticeably longer. If you haven’t already, it may be time to optimize your MySQL queries!

This is a quick guide to optimizing MySQL queries. It’s more general and theoretical than being a step-by-step tutorial.

In the Beginning

Creating your tables properly in the first place will save you headaches down the road. When you create your tables, make sure every column has the right type (INT, VARCHAR, TEXT, ENUM) and a sensible size where possible. For short strings, VARCHAR is usually a better choice than TEXT: a VARCHAR column has a defined maximum length (historically 255 characters, and longer in modern MySQL), while TEXT is effectively unlimited. You’ll probably still need some TEXT columns, so it’s a case of using each type where it fits so that the tables stay manageable as the site grows.

Explain

Put EXPLAIN before your MySQL query to find out what is going on. It gives you useful information such as the access type MySQL uses for each table in the query and the number of rows it expects to search through.

The possible types are (good to bad)…

  1. const/eq_ref
  2. ref/range
  3. index
  4. all

A type of “index” is better than a type of “all”, and “eq_ref” is better still; “all” means a full table scan and is the worst result for your query.
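
For example, running EXPLAIN against one of the queries used later in this guide might look like this (a sketch; the blog table is just the example table used throughout this post)…

EXPLAIN SELECT id FROM blog WHERE date >= '2016-01-01' AND date < '2017-01-01' ORDER BY id DESC;

The output has one row per table, including the access type and an estimate of the number of rows MySQL will examine.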

Select Explicitly

Select columns explicitly; don’t use SELECT *. The worst thing to do is return every column from a wide table when you only need one or two of them. Only selecting what you need makes for a lighter, more manageable MySQL query.

If you are doing a large search and returning a lot of rows, ask whether there is a better or quicker way to do what you are trying to do. One possibility is to return only the ids of the matching rows, then fetch the rest of the data later for the rows you actually need. Whether that helps depends on the query and what you need to do with the results.

Remove Functions

Some database systems can still use an index when you wrap a column in a function; MySQL generally cannot. Applying a function to an indexed column in the WHERE clause usually means the index won’t be used.

This is an example of using the year() function. This is bad…

SELECT id FROM blog WHERE YEAR(date)='2016' ORDER BY id DESC

It’s better to filter on the date column directly so an index can be used. Note that the date literals need quoting, and that BETWEEN includes both end dates…

SELECT id FROM blog WHERE date BETWEEN '2016-01-01' AND '2016-12-31' ORDER BY id DESC

or, more safely (this form also works if date is a DATETIME column)…

SELECT id FROM blog WHERE date >= '2016-01-01' AND date < '2017-01-01' ORDER BY id DESC

Indexing

So, you’re selecting only the columns you need and not wrapping indexed columns in functions; it’s probably time to index!

The aim is to not have any “all” types for any of the tables when you run the EXPLAIN on your query. You might also be able to get the number of rows searched down, but that might not be possible.

Put simply, for each table you add an index covering the columns that the query uses from that table (the columns in the WHERE, JOIN and ORDER BY clauses).

The order of the indexed columns matters!
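
For example, to support the date query from the “Remove Functions” section above, you could add a composite index covering the columns that query touches (a sketch; the table and column names are just the ones from the earlier examples)…

ALTER TABLE blog ADD INDEX idx_date_id (date, id);

Run EXPLAIN again afterwards and the access type for the blog table should change from “all” to something like “range”.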

Run the query you are trying to optimize in MySQL or phpMyAdmin, making a note of how long it takes to execute. Add the indexes, run EXPLAIN, then tweak the indexes; if they look OK, run the actual query. After optimizing, the query should be quicker, or at least no worse. If it is worse, your index may have its columns in the wrong order.

One thing to remember is that making an index for one query might speed up that query but if another query uses the same index it may actually slow that other query down.

Also, indexing properly should speed up SELECT queries, but every index adds overhead to INSERT and UPDATE statements, which have to maintain it.

Limit

LIMIT doesn’t necessarily mean MySQL only examines that number of rows. It may still scan (and sort) all the matching rows and then discard everything past the limit. Conversely, a LIMIT can stop a query early when you actually needed it to look at everything. Use LIMIT with care!
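
As a quick illustration using the same example blog table (title here is a hypothetical unindexed column), both queries below ask for ten rows but do very different amounts of work…

-- MySQL can read the first ten rows straight off the primary key and stop:
SELECT id FROM blog ORDER BY id DESC LIMIT 10;

-- With no index on title, MySQL typically reads and sorts every matching row
-- before the LIMIT is applied:
SELECT id, title FROM blog ORDER BY title LIMIT 10;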

This is a simple step-by-step guide to making a PHP Composer package that can be listed publicly on packagist.org for anyone in the world to install. We’ll be using GitHub to update the package. We’ll also add some testing with PHPUnit.

Creating a Composer Package

The basic steps to creating a new composer package are as follows.

  1. Create a GitHub repo for the project
  2. Clone the GitHub repo locally
  3. composer init
  4. composer install
  5. Write the PHP package, putting the PHP into a src directory.
  6. Commit and push to GitHub
  7. Give the package a version by using git tag 0.0.1 then git push --tags
  8. Log in to Packagist and add the new GitHub repo to your Packagist account.
  9. Make sure all the info packagist.org needs is in your composer.json

After following these steps, your package should be published on Packagist. Any future changes should be pushed to GitHub with a new tag, like this… git tag 0.0.2 then git push --tags. You can list all the tags with git tag. Every time you update in this way, GitHub gets updated and Packagist gets updated automatically.

Our class is called Bar, so our main PHP file has to be Bar.php (upper/lowercase matters!). We’ll put it in a directory called “src”…

<?php 
namespace Foo;
class Bar {
    public function helloworld(){
        return 'Hello, World!';
    }
}

Here is a sample composer.json file. Our namespace is “Foo” so we say that Foo is in the src directory in the composer.json…

{
    "name": "foo/bar",
    "license": "MIT",
    "require": {
        "php": "^7.0"
    },
    "require-dev": {
        "phpunit/phpunit": "^5.7"
    },
    "autoload": {
        "psr-4": {
            "Foo\\": "src/"
        }
    }
}

To use the package, import it by copy/pasting the command line instructions from Packagist. It’ll be something like this… composer require foo/bar. Then, once the package has been installed into the vendor directory, you can start using it, like this, for example…

<?php

require_once 'vendor/autoload.php';

$test = new Foo\Bar();

$test->helloworld();

Updating the Package

For testing purposes, each time you update the package you need to make sure the latest version is downloaded from Packagist.

Make sure the project you’re importing the package into has a composer.json like the one below. Use a greater-than-or-equal constraint (>=) for the package you’re testing, rather than a caret (^) or exact version constraint, so that composer update always pulls in your newest tag.

{
    "name": "neil/test",
    "authors": [
        {
            "name": "neil",
            "email": "[email protected]"
        }
    ],
    "require": {
        "foo/bar": ">=0.4.3",
        "phpunit/phpunit": "^6.5"
    }
}

But even then, composer update may still not pick up the new version of the package. You may need to run composer clearcache first, then run composer update again. Also, there is sometimes a short lag before Packagist updates, so don’t worry if the new tag doesn’t appear straight away.
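
So a typical update cycle from the consuming project looks something like this (a sketch)…

composer clearcache
composer update foo/bar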

Testing the Package

To add testing you might want to use something like PHPUnit or PHPSpec. This is using PHPUnit 6.5 which runs with PHP 7.0…

composer require --dev phpunit/phpunit

Make a directory called tests and make a file called BarTest.php…

<?php
declare(strict_types=1);

use PHPUnit\Framework\TestCase;
use Foo\Bar;

final class BarTest extends TestCase
{

    public function testOutputsExpectedTestString()
    {
        $this->assertEquals(
            'Hello, World!',
            (new Bar())->helloworld()
        );
    }
}

Then, making sure the dev dependencies are installed, you can run the test from the command line like so…

vendor/bin/phpunit --bootstrap vendor/foo/bar/src/Bar.php vendor/foo/bar/tests/BarTest.php

Unit tests can only call public methods directly; private methods have to be tested through the public ones.

Troubleshooting

The name of the class must be exactly the same as the filename, and vice versa. If your class is called SomeClass, the file must be SomeClass.php.

Errors can also come from not using git tag to create a version or not having all the info packagist needs in the composer.json.

To create a bash script that will work only for your user, you can store the bash files in your user’s home directory. The standard place to put them would be a folder called bin. Create it if it does not exist, then create the file. The name of the file is the name of the command you want to type to run it. So, if I want to call my command “commandname”, I would do…

mkdir ~/bin
nano ~/bin/commandname

Then, create the script with the shebang at the top. Note that the apt-get and service commands below need root, so either run the script with sudo or prefix those lines with sudo…

#!/bin/bash

# Update, upgrade, then restart
apt-get -y update
apt-get -y upgrade
apt-get autoremove
service apache2 restart

# update WordPress through WP-CLI
cd /var/www/html
wp core update
wp plugin update --all

Now, make the file executable…

chmod +x ~/bin/commandname

Then, to run the file from any directory, you’ll have to update your user’s .profile file…

nano ~/.profile

Adding one of the following lines to ~/.profile tells Linux that there are executable scripts in the ~/bin directory. The first appends ~/bin to the end of your PATH; the second puts it first, so your scripts take priority over any existing command with the same name. Reload with source ~/.profile or by logging out and back in…

PATH=$PATH:~/bin

or

PATH=~/bin:$PATH

Now you should be able to run the command, commandname, from any directory.

You can run this manually from the command line or you can create a cron to run it at regular intervals.

1 * * * * /bin/bash -c "~/bin/commandname"

Then reload the cron with…

sudo service cron reload

You can monitor the cron log in real-time with tail…

tail -f /var/log/syslog

Trying to use AWS S3 for the first time can be confusing. Here is a quick guide to roughly what has to happen.

Basic Steps to Set up S3

The only two sections you need in the AWS console for this are “S3” and “IAM”…

  • Create a S3 bucket.
  • Make a S3 bucket policy.
  • Create a policy to access the bucket in IAM.
  • Create a “programmatic-only” user for the bucket and attach this policy to it (IAM).

Store the info for your user (the secret will not be displayed a second time). You’ll use this to connect with the S3 instance in your code.

Easy, huh?

AWS is pretty good at telling you when you’re about to do something stupid during this process. There are plenty of warning signs on the screen if you make anything public. AWS is all about security and particularly dislikes us making things public that should not be public.

The bucket policy and user policy are both in JSON format.

There is a policy generator website to help you make the correct JSON, but the form itself is pretty good at telling you if there is an error in your policy or if you’ve made your bucket public. An example bucket policy might look like this…

{
    "Version": "2012-10-17",
    "Id": "Policy123465789",
    "Statement": [
        {
            "Sid": "Allow ALL access to the bucket by one user",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111111111111:user/myusername"
            },
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::my-bucket-name",
                "arn:aws:s3:::my-bucket-name/*"
            ]
        }
    ]
}

The user policy (IAM) is created by a wizard, but if you edit it afterwards you will see it is also a piece of JSON. Each action is listed in the JSON, so it is easy to delete the actions you don’t want, or simply to leave a specific action unselected when you’re creating the policy.

AWS S3 Docs

The AWS S3 docs for PHP are pretty extensive but getting to the exact thing you need is not always straightforward. I found that I was making a lot of google searches in order to find what I was looking for because the navigation wasn’t that great.

This page on the AWS SDK for PHP S3StreamWrapper was particularly well laid out and useful. The main AWS SDK for PHP docs are pretty extensive as long as you know the name of the function you want to use.

Also, if you do a search for instructions on how to do a certain thing, like connecting to S3, make sure the article is fairly up-to-date. Guides written for earlier versions of the AWS SDK may connect in ways that no longer work with the current version, even though other parts are similar or the same.

As with most programming, there are often multiple ways to complete the same task. For example, in the example below there are at least two ways to output the JPEGs to the screen via PHP (see comments in the code).

You can use this PHP code in any AWS instance that you can code PHP in. I added this to my Lightsail instance but it would also work on EC2 or with any other non-AWS hosting…

S3 Gallery and JPEG Displayer Example

This is a simple bit of code to turn every file in every bucket into a gallery. First, the code that creates the list of “thumbs”; in this case, the thumbs are the full-size images simply scaled down with CSS.

<?php

// Require the Composer autoloader.
require '../vendor/autoload.php';

use Aws\S3\S3Client;

try {

    // Instantiate the S3 client with your AWS credentials
    $s3Client = S3Client::factory(array(
        'version' => 'latest',
        'region'  => 'eu-west-2',
        'credentials' => array(
            'key'    => 'unique_string', // From AWS IAM user
            'secret' => 'unique_secret_string' // From AWS IAM user
        )
    ));

    //Listing all S3 Bucket
    $buckets = $s3Client->listBuckets();
    foreach ($buckets['Buckets'] as $bucket) {
        $bucket = $bucket['Name'];
        $objects = $s3Client->getIterator('ListObjects', array(
            "Bucket" => $bucket
        ));

        // Show each one 200x200 and link to full-size file...
        foreach ($objects as $myobject) {
            echo "<p><a href=\"/showitem.php?item={$myobject['Key']}\"><img src=\"/showitem.php?item={$myobject['Key']}\" style=\"height: 200px; width: 200px;\"></a></p>\n";
        } // end foreach
    } // end foreach


}catch(Exception $e) {
    // Only show this for testing purposes...
   exit($e->getMessage());
}

Then, to display the files from the S3 bucket, we do not want the AWS S3 URL showing in the browser, so we’re going to display the images through PHP with the showitem.php file. Here is the code for that file; it’s a very simple image displayer…

<?php

// Require the Composer autoloader.
require '../vendor/autoload.php';

use Aws\S3\S3Client;

$bucket = "my-bucket-name";

try {

    // Instantiate the S3 client with your AWS credentials
    $s3Client = S3Client::factory(array(
        'version' => 'latest',
        'region'  => 'eu-west-2',
        'credentials' => array(
            'key'    => 'unique_string', // From AWS IAM user
            'secret' => 'unique_secret_string' // From AWS IAM user
        )
    ));

    $s3Client->registerStreamWrapper();

    if(isset($_GET['item'])){
        $keyname= filter_var($_GET['item'], FILTER_SANITIZE_STRING);

        // Get the object.
        $result = $s3Client->getObject([
            'Bucket' => $bucket,
            'Key'    => $keyname
        ]);

        // Display the object in the browser.
        $type = $result['ContentType'];
        $size = $result["ContentLength"];
        header('Content-Type:'.$type);
        header('Content-Length: ' . $size);
        echo $result['Body'];

        // Alternatively, get file contents from S3 Bucket like this...
        // $data = file_get_contents('s3://'.$bucket.'/'.$keyname);
        // echo $data;
    }

}catch(Exception $e) {
    // Only show this for testing purposes...
    exit($e->getMessage());
}

When you add a file to S3, you’re probably either doing it programmatically or dragging and dropping into the S3 browser. When you want to use the file, you’ll find you can get quite a lot of information from the getObject() method that is listed in the docs. I wanted to find the exact response field for the content length, so I just looked up getObject in the docs to see what is returned.

This is a very quick example, and there are some things here that would be better done a different way. For example, rather than passing the keys straight to the factory() method, we should probably use the ~/.aws/credentials file to define one or more profiles so that our secret info isn’t sitting in the PHP of the public part of the website.
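
As a rough sketch of that approach, the keys live in ~/.aws/credentials and the client is told which profile to use (the profile name and key values below are placeholders)…

[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY

Then the PHP no longer needs the keys at all…

$s3Client = S3Client::factory(array(
    'version' => 'latest',
    'region'  => 'eu-west-2',
    'profile' => 'default' // read from ~/.aws/credentials instead of hard-coding keys
));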

Also, S3 can be accessed with the AWS CLI once it has been configured with the same IAM credentials.
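
For example, something like this (the bucket name and file are placeholders)…

# List everything in a bucket
aws s3 ls s3://my-bucket-name

# Copy a local file up to the bucket
aws s3 cp ./photo.jpg s3://my-bucket-name/photo.jpg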

I had a PHP website that was the last man standing on some slow shared hosting with no SSH access. Since I would be moving the website anyway, I decided to try something different with it. The website in question is PHP/PDO/MySQL with no framework.

Having already tried AWS Lightsail App+OS, I wanted to experiment with the Lightsail “OS Only” option. What better thing to do than to install the Nginx web server on it? Starting from scratch with an OS Only box I would be able to take a look at another side of AWS Lightsail (without Bitnami) and also learn about Nginx and using a LEMP stack.

I created a new Lightsail instance with Ubuntu 18.04 LTS. It was exactly the same as most VPSes that come with only Linux installed. After installing Nginx on Lightsail, the version of PHP I got by installing PHP-FPM was PHP 7.2.10.

Link… https://www.digitalocean.com/community/tutorials/how-to-migrate-from-an-apache-web-server-to-nginx-on-an-ubuntu-vps

But, we’ve jumped ahead. Let’s look at Linux…

First Things First: Linux

The first thing to do is to update and upgrade Linux. Sudo was already installed, so it’s straight into…

sudo apt-get update
sudo apt-get upgrade

I believe that while Debian is a pretty bare bones install, Ubuntu comes with a lot of stuff pre-installed such as sudo and nano, which is very convenient.

Nginx

With Linux updated, we can install the next part of the LEMP stack: Nginx (“engine-x”). This is the only part of the stack I hadn’t had much experience of in the past, so it was the most interesting part for me. Apart from the different style of config file, I was expecting it to be more different from Apache than it turned out to be…

sudo apt-get install nginx

Now, magically the unique domain you have in your Lightsail console will work in your browser, giving you a page like…

Welcome to nginx!
If you see this page, the nginx web server is successfully installed and working. Further configuration is required.

For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.

Thank you for using nginx.

Doing some prep for when we change the DNS and point the domain name at this hosting, we should also make a config file…

The Nginx Config File

The config file should be named after the site and created in /etc/nginx/sites-available. Copy the contents of the default file across to the new site’s file, changing the domain name. You can then symlink it into “sites-enabled”…

sudo ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/

https://www.digitalocean.com/community/tutorials/how-to-install-linux-nginx-mysql-php-lemp-stack-ubuntu-18-04

sudo unlink /etc/nginx/sites-enabled/default
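
For reference, the new config file might contain a minimal server block along these lines (a sketch; the domain, root path and index line will depend on your site)…

server {
    listen 80;
    listen [::]:80;
    server_name example.com www.example.com;
    root /var/www/example.com;
    index index.php index.html;
    # (plus the location blocks, e.g. the PHP/FastCGI block shown later in this post)
}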

Test the new Nginx config, then restart to load the new settings with…

sudo nginx -t
sudo systemctl reload nginx

Change the permissions of the web root folder (e.g. /var/www/html), then upload your files. Next, convert the .htaccess rules…

https://winginx.com/en/htaccess

An Apache block like this will eventually need the * removing for Nginx…

<Files *.inc>
Deny From All
</Files>

becomes…

location ~ .inc {
    deny all;
}

I also had to convert the “break” flag to “last” on the mod_rewrite rules…

rewrite ^/(.*)/$ /item.php?item=$1 last;

Then, add the converted code to the example.com config file and test it again. Any duplicate directives will need commenting out with #. In my config file, \.php caused an error, so I removed the backslash.

PHP

Now, we can install PHP and MySQL to complete our LEMP stack…

sudo apt install php-fpm php-mysql 

Luckily, the site I was moving had pretty modern PHP with nothing that needed fixing at all. I uploaded a file calling phpinfo() to test that PHP was working. All good!

Nginx Default Log File

index.php not working! Look at the log file…

tail /var/log/nginx/error.log

Yes, the PHP was fine, it turns out that PDO was unhappy that I hadn’t added the database yet…

MariaDB

Finish off getting MariaDB installed, then check it’s working…

sudo apt install mariadb-client-core-10.1
sudo apt install mariadb-server
sudo systemctl status mariadb

I was getting the error, below…

ERROR 2002 (HY000): Can’t connect to local MySQL server through socket ‘/var/run/mysqld/mysqld.sock’ (2 “No such file or directory”)

So, I did a “locate” for the my.cnf file and the “mysqld.sock” file and added this to the mysql/mariadb config file, my.cnf…

socket  = /var/run/mysqld/mysqld.sock

Then…

sudo service mysql restart

Login for the first time with sudo…

sudo mariadb -uroot

Now you can create the database and database user for the app.
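
Something like this (a sketch; the database name, username and password are placeholders)…

CREATE DATABASE name_of_database;
CREATE USER 'appuser'@'localhost' IDENTIFIED BY 'a-strong-password';
GRANT ALL PRIVILEGES ON name_of_database.* TO 'appuser'@'localhost';
FLUSH PRIVILEGES;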

https://stackoverflow.com/questions/5376427/cant-connect-to-local-mysql-server-through-socket-var-mysql-mysql-sock-38

SSL Encryption

Pointed domain name at the public IP address with CloudFlare. Server-side SSL encryption to follow.

https://www.digitalocean.com/community/tutorials/how-to-secure-nginx-with-let-s-encrypt-on-ubuntu-18-04

Simple Password Authentication

https://www.tecmint.com/password-protect-web-directories-in-nginx/
sudo apt install apache2-utils
sudo htpasswd -c /etc/nginx/.htpasswd username

Then put the following in the “location” block you want to protect in the config file…

auth_basic "Administrator Login";
auth_basic_user_file /etc/nginx/.htpasswd;

Then, test and restart Nginx.

Force or Remove WWW

For some sites I prefer to keep the www in, so I did the opposite of this on this occasion…

server {
server_name www.example.com;
return 301 $scheme://example.com$request_uri;
}
server {
server_name example.com;
# […]
}

https://stackoverflow.com/questions/11323735/nginx-remove-www-and-respond-to-both

index.php Downloads instead of Displaying

Sometimes index.php, or any PHP file, can start downloading instead of displaying normally in the browser. The fix is to pass PHP scripts to the FastCGI server. Make sure you use the correct socket path; the one below is for PHP 7.2 and will be different for other versions of PHP…

server {
listen 80;
listen [::]:80;
root /var/www/myApp;
index index.php index.html index.htm;
location ~ \.php$ {
try_files $uri =404;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass unix:/run/php/php7.2-fpm.sock;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
}

Debug Mode

To show extra info in the error.log file add the word “debug” to the error_log statement…

error_log /var/log/nginx/error.log debug;

Example nginx.conf file

This file is taken from here. Shows SSL encryption…

server {
listen 80;
listen [::]:80;
server_name example.com www.example.com;
return 301 https://example.com$request_uri;
}
server {
listen 443 ssl;
listen [::]:443 ssl;
server_name www.example.com;
ssl on;
ssl_certificate /root/certs/APPNAME/APPNAME_nl.chained.crt;
ssl_certificate_key /root/certs/APPNAME/ssl.key;
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:20m;
ssl_session_tickets off;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!3DES:!MD5:!PSK';
ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate /root/certs/APPNAME/APPNAME_nl.chained.crt;
root /srv/users/serverpilot/apps/APPNAME/public;
access_log /srv/users/serverpilot/log/APPNAME/APPNAME_nginx.access.log main;
error_log /srv/users/serverpilot/log/APPNAME/APPNAME_nginx.error.log;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-SSL on;
proxy_set_header X-Forwarded-Proto $scheme;
include /etc/nginx-sp/vhosts.d/APPNAME.d/.nonssl_conf; include /etc/nginx-sp/vhosts.d/APPNAME.d/.conf;
return 301 https://example.com$request_uri;
}
server {
listen 443 ssl;
listen [::]:443 ssl;
server_name example.com;
ssl on;
ssl_certificate /root/certs/APPNAME/APPNAME_nl.chained.crt;
ssl_certificate_key /root/certs/APPNAME/ssl.key;
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:20m;
ssl_session_tickets off;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!3DES:!MD5:!PSK';
ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate /root/certs/APPNAME/APPNAME_nl.chained.crt;
root /srv/users/serverpilot/apps/APPNAME/public;
access_log /srv/users/serverpilot/log/APPNAME/APPNAME_nginx.access.log main;
error_log /srv/users/serverpilot/log/APPNAME/APPNAME_nginx.error.log;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-SSL on;
proxy_set_header X-Forwarded-Proto $scheme;
include /etc/nginx-sp/vhosts.d/APPNAME.d/.nonssl_conf; include /etc/nginx-sp/vhosts.d/APPNAME.d/.conf;
}

Conclusion

Website moved and working as it did before, but possibly slightly faster.

In just the short time I have been using it Nginx is already growing on me. I like the simplicity. I like that it is quite similar to Apache in some ways. And, I like the fact that it should be faster than Apache.

Not being able to use .htaccess files, and the Nginx config being different from Apache virtualhost files, was not bad at all. A combination of an htaccess-to-Nginx converter and Google/Stack Overflow has already taught me a lot about how to replicate in Nginx what I might do with an .htaccess or virtualhost file.

As expected, the “OS Only” version of AWS Lightsail was much more like a standard VPS and there was nothing too hard in setting it up and moving a site across and onto Nginx.

AWS Lightsail is the closest thing AWS has to shared hosting. It is their quick, easy and inexpensive off-the-shelf hosting that has SSH access and many of the benefits of using a more expensive EC2 instance.

It is an affordable entry in cloud computing, but is it any good?

I decided to try out Amazon’s cheapest hosting offering by moving a WordPress blog from some shared hosting to Lightsail. I have tried EC2 in the past so this was not my first experience with AWS, but I was curious to see what their new more consumer-based hosting was like.

Creating an Instance

When you create a Lightsail instance you choose how big or small you want it. You also choose whether you want just the OS, or an app pre-installed with Bitnami (“App+OS”). The app options include a pre-installed WordPress blog, LAMP, MEAN, LEMP and several other applications. Alternatively, you can choose “OS Only”, where you currently have the choice of Windows or various Linux flavors: Amazon Linux, Ubuntu, Debian, FreeBSD, openSUSE and CentOS.

I went with the PHP 7 LAMP stack option in the smallest size ($3.50 per month). I chose this option because I wanted to make sure WordPress was exactly the way I wanted it. And I wanted to see what the LAMP option was like.

The price also includes a dedicated IP, which makes setting everything up before pointing the domain name at the new instance a breeze; definitely a nice touch.

The LAMP 7 option came with PHP 7.1. But it’s possible to upgrade. All the elements of LAMP come pre-installed (Linux, Apache, MySQL and PHP) but you’ll want to configure them to your needs.

The main thing you can say about the setup is that it was lightning fast. Within seconds I had a fully operational instance. In the past, when setting up some hosting, you might have assumed it would take at least a couple of days. Because the dedicated IP is plainly visible on the AWS console, you can immediately see the default index page in your browser.

First Look at Lightsail App+OS

The main difference between the “App+OS” option and a normal VPS is Bitnami. You notice right away that the default username is “bitnami”, and after logging into the Linux console you get a large “Bitnami” logo at the top of the screen.

So, what is this Bitnami?

Bitnami

Amazon AWS has so many quirks that you might assume Bitnami is an AWS thing, like their own Amazon Linux, but it is actually quite widely used in cloud computing (including on Oracle Cloud and Google Cloud Platform).

With the “App+OS” Bitnami option, a lot of the things you normally have to do to set up a LAMP stack are already done for you. For example, Apache is pre-installed with most (if not all) modules, and even MySQL is pre-installed. However, you’ll need to go looking for your MySQL root login; see below.

The other slightly unusual thing you notice with Bitnami, upon logging in over SSH or SFTP, is the directory structure: the apache.conf does not look the same as normal, and where are the virtual host files?

Bitnami uses httpd-app.conf, httpd-prefix.conf and httpd-vhosts.conf files, as described here.

This is unusual, and I imagine many people who do not want to deal with it will prefer the “OS Only” option. While it may take a little longer to set up, once that’s done you have a “normal”, Bitnami-free Linux instance.

Transferring a Website to Lightsail

Having gone with the LAMP (PHP 7) option I basically followed my guide from here to move a WordPress blog over to different hosting. With minimal setup to do it was mainly a case of setting up the database, installing WordPress then using WP-CLI to install the plugins and theme.

As the instance was just going to be hosting one website I didn’t have to worry at all about the virtualhosts as everything was set up to just work from the off.

My first question was how to log in to MySQL; the login info did not appear to be anywhere in the AWS console. The password for the root user is the Bitnami application password. From the home directory (where you arrive after logging in) just type…

$ cat bitnami_application_password

Transferring everything across, most things just worked. While PDO worked fine in normal PHP pages, I had to tweak the php.ini to get PDO to work from a script run with cron. For me, I just had to uncomment the .so file for PDO which was almost the last line of the php.ini.

After changing something like the php.ini you’ll have to restart. The following command seems to stop everything (Apache/httpd, PHP and MySQL), then start it all again; perfect for making sure everything gets restarted at once, but not very graceful (from here)…

$ sudo /opt/bitnami/ctlscript.sh restart

To just restart apache you’d just add “apache” to the end…

$ sudo /opt/bitnami/ctlscript.sh restart apache

Linux

While some things are very different in Bitnami, it’s basically just a Linux instance. The Linux version I got with the LAMP (PHP 7) option was actually Ubuntu 16.04, so if you want the latest version of Ubuntu (18.04 is currently the latest LTS), or a different flavor of Linux, choose the “OS Only” option. I am most comfortable with Ubuntu/Debian, and a lot of the standard CLI commands are exactly the same as on a plain Ubuntu install.

Nano comes pre-installed and was the default editor for the crontab.

$ crontab -e

BTW, cron needs the full path to php, i.e. something like…

* * * * * /opt/bitnami/php/bin/php -f /opt/bitnami/apache2/htdocs/scripts/index.php "name_of_method()"

Then…

$ sudo service cron reload

The timezone is worth setting correctly (and while you’re at it, check the locale, which is what can affect your keyboard layout in the terminal). This is based on Ubuntu 16.04, so something like this will list the timezones, set one, then check which timezone you’re using…

$ timedatectl list-timezones   
$ sudo timedatectl set-timezone America/Vancouver  
$ timedatectl

Now, that the Linux timezone is set, you may also need to update the timezone PHP uses by updating this line in the php.ini…

date.timezone="Europe/London"

For all the PHP timezone variables, click on your region from the PHP timezones page.

Something else that is the same as Ubuntu is updating and upgrading…

$ sudo apt-get update
$ sudo apt-get upgrade

Once you get used to the quirks and the different directory structure with Bitnami, most things seem the same as a typical Ubuntu instance.

Issue(s) with AWS Lightsail

The first “upgrade” was a large one which took a while. It took so long, in fact, that either PuTTY went inactive, or my computer went to sleep, or both. After this, the website went down and I had no SSH access. What fixed it was not “rebooting” the instance but “stopping” and “starting” it from the AWS console. After that I had a different public IP address, but I was able to fix whatever had happened with the upgrade.

If the restart script is the opposite of graceful, stopping and starting the instance was similarly ungraceful; comparable to doing the same thing with any VPS.

Apart from some minor changes that will probably be easy to get used to, I did not have many issues at all.

AWS Lightsail App+OS: Conclusion

Bitnami saved some time during setup, but honestly, any time I saved was probably offset by time spent figuring out what was going on with Bitnami.

I’m not 100% sure that the speed of setup with Bitnami is worth the changes it makes to the Linux operating system. For something like this example, a WordPress blog that isn’t going to need a lot of administration, the “App+OS” option was fine though.

If you are a purist and don’t mind setting up the Linux instance with everything you need, there is always the “OS Only” option, which doesn’t use Bitnami. This would be better for a website where you’re going to want to make more changes to the virtualhost files and/or possibly upgrade to an EC2 instance in future. If you are already a full-stack LAMP developer you’ll probably want that option for any actual development. App+OS seems to be mainly for people who do not want to get too involved with the “L” or “A” parts of LAMP.

AWS Lightsail with the App+OS option is perfect for someone who just wants to have a cheap WordPress blog running on AWS, as I did here. For a brand new blog, choosing the “WordPress” option would simplify the whole process even more.

I’d say App+OS might also be a good way to play around with something new such as MEAN before starting an actual project with it. Everything would be pre-installed so you could get straight into the javascript and the NoSQL.

So far so good. The instance seems fast for a WordPress blog, it certainly is compared to the previous shared hosting. And, very affordable.

Once upon a time you could buy a domain name and hook it up to some cheap, shared hosting and that was all you had to do. You could build your website or install a WordPress blog and no further configuration was really required. These days you can still do this, but you are leaving yourself open to security, speed, and privacy issues. Surfers are becoming more aware of which websites are safe and which aren’t, through information from their browser and anti-virus programs. Not only this, search engines are also starting to penalize websites which do not protect the surfer’s privacy, are slow, or are insecure.

People have wanted to improve search engine rankings through SEO work for a long time. Now, in 2018, SEO is different to how it was 15 years ago, and its importance is joined by security, speed and privacy as the four things everyone should be looking into. I have labeled each header to show which of the four areas the technology addresses.

These are just my thoughts at the moment, much of it will be my opinion. There are people who know much more about everything here so my advice would be to do more research before making any changes to your websites.

Best Practices for Websites in 2018

These best practices are the current general buzzwords for all websites that I think people may be slow to adopt. They should be added to the specific best practices for whichever kind of website you have, concerning permissions, ownership, coding, code injection, etc.

This guide is just a quick look at each topic; I may have missed some out. For each there is a short discussion and generally a link or two to follow for more information or tutorials.

If you use a CDN, like CloudFlare, some of these may already be done without you having to think about it but they are good to know about, especially if you do not use a CDN. Also, if you use managed or shared hosting you may not be able to change some of these, but they may already be done for you by your hosting company.

Here are some best practices for websites in 2018. Some of these used to be nice-to-haves but are fast becoming must-haves, if they are not already.

Google Audit (speed)

Google Audit on the Chrome browser has replaced Google Pagespeed and offers a lot more detail than before as to how Google views your website.

Much of what the Google Audit looks at is the speed of your website, especially over mobile networks. It wants the content that first appears in the browser to render very quickly, with content from further down the page loading afterward. The audit mainly covers the content of the website and how quickly it loads. The harshest test is to run it in mobile mode with network (3G) and CPU throttling switched on; Google wants the above-the-fold content displayed quickly even on slow 3G.

Every time I do an Audit I have to be prepared for it to be painful reading. The good thing is that it highlights issues that you might not have seen, or you might have thought that they were fixed.

SSL/HTTPS (privacy)

Enabling SSL encryption and forcing your website to use HTTPS has hit the headlines, mainly because of changes to browsers which mean that HTTP-only websites are starting to look like bad places to visit.

The docs for Apache SSL are here… link

The disadvantage of using SSL encryption by itself is that the website can often be much slower than it would have been without the encryption, due to the extra handshakes needed with HTTPS. But there are ways to further tweak HTTPS that improve both the security and the speed of HTTPS websites. Most of these changes can be made in the SSL config file on Debian; if you add them to another file instead (e.g. apache2.conf or a virtualhost file), make sure there are no conflicts.

To check your own site to see how it ranks for security this website gives a good overview and even gives you a grade to show exactly how secure it thinks your site is.

TLS Session Resumption (security/speed)

TLS Session Resumption is configured in the SSL config file on Apache web servers. By default, it should be enabled. Check whether this is enabled for your website at SSL Labs.

TLS Session Resumption is the default with Cloudflare Flexible SSL… link.

HTTP/2 (speed)

If you use a CDN, HTTP/2 may already be set up, or it may be an option you can select. If you do not use a CDN, you should check that your server is compatible with HTTP/2, like I did in this post.

Enabling HTTP/2 before HTTP/1.1 looks like this…
Protocols h2 http/1.1

HTTP/2 wiki… link

HSTS (speed/security)

HSTS wiki… link

How to use HSTS… link

On Debian you have to enable the headers module with a2enmod headers, then add this to the virtualhost file or the apache2.conf file…

15552000 seconds is 6 months.

# Use HTTP Strict Transport Security to force client to use secure connections only
Header always set Strict-Transport-Security "max-age=15552000; includeSubDomains;"
Header always set X-Frame-Options DENY
Header always set X-Content-Type-Options "nosniff"

Then restart apache and test with SSL Labs.

Perfect Forward Secrecy (privacy/security)

Enabling Perfect Forward Secrecy (PFS)… link, also link.

Use TLS (security)

Some cryptographic protocols are deprecated because they can be broken and are thus insecure. Very old browsers may not support TLS 1.1 or TLS 1.2, so you have to strike a compromise between security and accessibility. If you think a lot of your visitors may have older browsers, you can keep SSL 2.0, SSL 3.0 and TLS 1.0 enabled; however, these are all insecure. Allowing only TLS 1.0 and above is better; allowing only TLS 1.1 and above is much more secure. The risk of forcing too high a cryptographic protocol is that some visitors may be using browsers that do not support it. It’s a balancing act which comes down to your own decision about what is more important: security or accessibility.

If you just wanted to allow TLS 1.1 and TLS 1.2 you would add this to your ssl.conf or apache2.conf (in Debian). Be careful that there are no conflicts between these two files and the individual virtualhost files…

SSLProtocol TLSv1.2 TLSv1.1

You can check which browsers use which cryptographic protocols at this link.

DNS CAA (security)

Specifying which certificate authorities are allowed to issue certificates for your domain also makes your website more secure. You do this through your DNS provider, domain registrar or CDN, where available.

On CloudFlare using their Flexible SSL you would need the following…

example.com. IN CAA 0 issue "comodoca.com"
example.com. IN CAA 0 issue "digicert.com"
example.com. IN CAA 0 issue "globalsign.com"
example.com. IN CAA 0 issuewild "comodoca.com"
example.com. IN CAA 0 issuewild "digicert.com"
example.com. IN CAA 0 issuewild "globalsign.com"

Taken from the CloudFlare blog

See also…

  • TLS False Start (Speed)
  • OCSP stapling

DNSSEC (security)

DNSSEC was designed to protect applications (and caching resolvers serving those applications) from using forged or manipulated DNS data, such as that created by DNS cache poisoning. All answers from DNSSEC protected zones are digitally signed. By checking the digital signature, a DNS resolver is able to check if the information is identical (i.e. unmodified and complete) to the information published by the zone owner and served on an authoritative DNS server. While protecting IP addresses is the immediate concern for many users, DNSSEC can protect any data published in the DNS, including text records (TXT) and mail exchange records (MX), and can be used to bootstrap other security systems that publish references to cryptographic certificates stored in the DNS such as Certificate Records (CERT records, RFC 4398), SSH fingerprints (SSHFP, RFC 4255), IPSec public keys (IPSECKEY, RFC 4025), and TLS Trust Anchors (TLSA, RFC 6698).

link

Caching (speed)

CDN

A CDN caches copies of your content on servers close to your visitors, which speeds up delivery. See also cloud hosting.

Service Workers

A service worker is JavaScript that caches parts of the website on the viewer’s machine so they can still view your website if they lose their internet connection.

301 Redirects (SEO/speed)

Having pages that load quickly and not having duplicate content are big parts of SEO. 301 redirects tell search engines and browsers that they should be using a certain URL. For example, you should be redirecting from HTTP to HTTPS, and you can redirect from non-www to www or vice versa.
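
For example, on Apache an HTTP-to-HTTPS redirect in a virtualhost or .htaccess might look something like this (a sketch, assuming mod_rewrite is enabled)…

RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]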

Canonical URLs (SEO)

With a canonical URL you are telling the search engine the exact URL it should be using. This is another method of ensuring that the pages are not going to be listed several times and appear to be duplicate content to search engines.
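
For example, a canonical link tag goes in the page’s <head> and looks like this (the URL is a placeholder)…

<link rel="canonical" href="https://example.com/my-post/">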

Schema.org (SEO)

You add schema to your HTML markup. This is mainly for search engines as it is not visible on the page… link
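
For example, a minimal JSON-LD block for an article might look like this (the values are placeholders)…

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Example Post Title",
  "datePublished": "2018-01-01"
}
</script>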

Have a Privacy Policy (privacy)

GDPR was launched in the EU in 2018. Data protection has been around for a long time, but the addition of GDPR means that websites who have European visitors should definitely consider having a privacy policy. This is all to do with collecting data on individuals, and how that data is used. It’s probably safer not to collect any data at all or as little as possible. I know useful stats-based websites that have closed as a direct result of GDPR, which is a shame. On the plus side, it gives Europeans more control over their data, which is probably a good thing.

Conclusion

Do an analysis of your website on SSL Labs and do a Google Audit. Both sets of results will give you a list of things that are good and things that are bad. You can seek to improve the things that are bad, some of which will be listed in this article. It is probably not possible for mere mortals to get 100% perfect, but a lot of these steps are both free and easy to implement so it’s worth trying to get as high a score as possible.

I have focused on privacy, security, speed and SEO in this guide. There are other considerations that have always been around, or are not especially new for 2018, such as accessibility and having a mobile-friendly website, which you should also look at if you have not already.

Some of this is primarily aimed at mobile users. Google Audit and service workers, in particular, are very concerned with how the website behaves on mobile connections, which may be intermittent. The benefit of working on these, along with having a mobile-friendly website, is that you may well get more mobile visitors. Google wants to send mobile visitors to websites they’ll enjoy using, so it is gradually increasing the importance of these factors in its mobile rankings.

Here are some tweaks you can make to various parts of Linux to make the whole experience a little easier and more intuitive. This guide is for Ubuntu and Debian flavours of Linux.

Change the Config Editor to Nano

On a fresh Debian install the default editor may be something you don’t know (joe or vi, for example); if so, change it to one you do know, e.g. nano. This command lists the editors available so you can select the one you wish to use…

sudo update-alternatives --config editor

Now, commands that use the default editor such as visudo will use your chosen editor.

Turn off passwords for a User

Once the config editor is nano, you can edit the sudoers file with the visudo command…

sudo visudo

One thing you might want to do is turn off passwords for yourself so that you do not have to keep typing the password when you run sudo commands. Add this line near the end of the file, after the “%sudo” group line…

myuser ALL=(ALL) NOPASSWD:ALL

Tweak Nano

Some changes I like to make when I first set up Linux are to Nano itself. I like to turn on smooth scrolling and make the numeric keypad behave properly. To do this, edit the nano config file by running this command as root or with sudo…

sudo nano /etc/nanorc
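
The lines I add look something like this (a sketch; option names vary a little between nano versions, so check the comments already in /etc/nanorc)…

# Smooth, line-by-line scrolling
set smooth
# Make the numeric keypad keys behave as expected
set rebindkeypad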

Installing More Than One Version of PHP

To list all the versions of PHP that are installed you can type…

update-alternatives --list php

This then allows you to switch between them if you have more than one installed at the same time. Really, this is more useful on a development machine; it is probably not needed on a web server.
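
To actually switch, you can run something like this (a sketch; the exact path depends on which versions you have installed)…

sudo update-alternatives --config php
# or set a specific version directly
sudo update-alternatives --set php /usr/bin/php7.2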

The idea here is to complete the whole process as quickly as possible, with minimum downtime for your blog. While we do a lot of work on the new blog quite early in this guide, note that we do not make any changes to the old blog, and we do not change the DNS until right at the end of the process. So, right up until we’re ready to change over, the blog is up and running on the old hosting. Here is more information on WordPress and the command line.

The assumption here is that both the old hosting and the new hosting can be accessed at the same time, although once you have everything from the old hosting you shouldn’t need that again. You probably want to keep the old one intact in case you have any issues with the new hosting or transferring the data. This is also assuming that Apache is used on both old and new hosting and that the new hosting is Debian or Ubuntu.

I recently moved four blogs to Digital Ocean hosting with Debian 9 using this method. Please be careful if you follow this guide as some of the steps may be different on different hosting. If I have left out any details or you have any improvements, please let me know.

Backup the Files from the Old Hosting

  1. Export the SQL for the entire database to a SQL file. If it is shared hosting you can export using phpMyAdmin, or if you have shell access you can export using the mysqldump command (see the example just after this list).
  2. Make sure you have the .htaccess and wp-config.php files.
  3. Download the theme you’re using if it’s a custom theme, a child theme or if you have changed anything about it.
  4. Make sure you have downloaded all the uploads directory and anywhere you have uploaded content to.
  5. You just need the plugin folder names for all the plugins you have installed. (If you use a standard theme from the WP repository, you can install it from just its folder name too.)
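
A minimal mysqldump from the old hosting might look like this (a sketch; the user, database name and output file are placeholders)…

mysqldump -u olduser -p name_of_old_database > backup.sql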

Setting Up the New Hosting

Install Apache, PHP and MySQL/MariaDB

You only have to do this once per hosting. There is a guide to doing this here.

You do not need to install PHPmyAdmin on the new hosting as that is not used in this guide.

Install WordPress

Something like this from the command line…

cd /var/www
wget http://wordpress.org/latest.tar.gz
tar xfz latest.tar.gz

This makes a directory called “wordpress” with all the files inside it. Then, to rename the directory to the name of your website you can do this…

mv wordpress newblog.com

At this point upload the .htaccess and wp-config.php files into the website’s root.

Check the .htaccess

Take a look at the .htaccess file and make sure it looks ok. Make sure you always have the original unadulterated .htaccess file backed up somewhere so that if you make any changes to it while troubleshooting, you can always re-add stuff at a later stage once everything is fixed.

Re-create the MySQL Database

Now, you have the latest version of WordPress sat on your new hosting but it will not work because it is not connected to any database. To create a new database you would login to MySQL on the command line and do something like…

create database name_of_new_database;
exit

Now that you've exited from MySQL, you can import the SQL file from the command line. Upload the SQL file anywhere you like; perhaps put it with the other WordPress files in the website's root directory. Then, to import it, do something like…

mysql -u root -p name_of_new_database < /var/www/newblog.com/backup.sql

That should import all the tables. Check that everything is in place by logging in to MySQL and doing something like...

use name_of_new_database;
show tables;

If everything is there, create your blog user with a password and grant access. The easiest/quickest way would be to use the existing username and password from the wp-config.php file, but you can change the username and password here as long as you update the wp-config.php afterwards...

CREATE USER 'newuser'@'localhost' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON name_of_new_database . * TO 'newuser'@'localhost';

After changing the privileges you'll also want to flush them for them to work...

FLUSH PRIVILEGES;

Re-add Uploads

At this point your blog should still be working on your old hosting, you have not changed anything on the old hosting. On the new hosting the WordPress files are in place and the new database is able to be read by the WordPress files.

Now is as good a time as any to upload the uploads directory, any custom plugins that are not in the WordPress repository and the theme you're using (unless the theme is on the WordPress repository). You can do this later but it's better to do it now before you forget.

Re-add Plugins

This stage has to be done after the blog is connected to the database. We use WP-CLI to re-add the plugins quickly while the website is still running on the old hosting; SFTP or FTP might take a long time, so this is the quick method. If you waited until you switched over the DNS you could log in to the admin dashboard and re-add the plugins from there; however, certain plugins might be required just to log in to the dashboard (e.g. if you're using Cloudflare's flexible SSL), and why wait when you can easily get them all added beforehand?

Install WP-CLI...

curl -O https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar
chmod +x wp-cli.phar
sudo mv wp-cli.phar /usr/local/bin/wp

Now, once WP-CLI is installed, you can navigate to your blog's root directory on the new hosting if you're not already there. Then have the old hosting open on SFTP/FTP and navigate to the plugins folder. Grab each plugin folder name and do something like this...

cd /var/www/newblog.com
wp plugin install wp-plugin-1 wp-plugin-2 wp-plugin-3 wp-plugin-4 wp-plugin-5 --activate

So, instead of "wp-plugin-1" you might have "jetpack", etc. If you are doing this as root you have to use the --allow-root flag, and afterwards you should change the owner of all the files to a different user by doing...

chown -R differentuser:differentuser ../newblog.com

If you have not re-uploaded your theme you can also do this using WP-CLI by doing something like...

wp theme install twentysixteen --activate

It's probably a good idea to check everything is owned by differentuser, or just change the owner:group to differentuser after you're finished using WP-CLI.

Set Up the Virtual Host file and Add Site

At this stage, the blog on the old hosting is still working normally. On the new hosting, WordPress should be operational but you haven't changed the DNS over. Hopefully, if the database was copied across correctly and the .htaccess and wp-config.php files are ok you should be able to change over now with the minimum of disruption. However, you may wish to test the blog out on the new hosting to make sure it works before you change the DNS. You can do this if you have a unique IP for your hosting and the website is the default.

To make the new website the default so you can access it with the IP address, go to the sites-available directory and modify the 000-default.conf file...

cd /etc/apache2/sites-available
nano 000-default.conf

You may not have to change anything here apart from pointing the document root at your blog's root, so you can test it out using your unique IP...

DocumentRoot /var/www/newblog.com

Then, to make sure your .htaccess will work, you may need to add something like this (the Order/allow lines are the old Apache 2.2 syntax; on Apache 2.4 the Require all granted line does that job)...

<Directory /var/www/newblog.com>
Options Indexes FollowSymLinks MultiViews
AllowOverride All
Require all granted
Order allow,deny
allow from all
</Directory>

The default may already be enabled, but if not you would do a...

a2ensite 000-default.conf
service apache2 restart

Now, you can find your IP address by running this command...

hostname -I

Copy-paste the IP address into the browser and your blog should be there. Clicking the links of the posts/pages will take you back to your old hosting. To check that the individual post pages work you'll have to modify the URLs so that they look something like http://123.123.13.13/name-of-post in the browser (i.e. swap the domain name with the new IP address).

If everything is working, copy the file and complete it for your new domain...

cp 000-default.conf newblog.com.conf
nano newblog.com.conf

In addition to the changes we made before, you'll also now want to add your domain name to the virtual host like...

ServerName newblog.com
ServerAlias www.newblog.com

You can also specify a unique error and access file so that you can see exactly what is happening with this one blog if there are any problems.

Now you can save, exit and enable the site...

a2ensite newblog.com.conf
service apache2 restart

Point the DNS at the New Hosting

If you're using something like Cloudflare the changeover might be very quick; otherwise you'll have to wait it out. Generally speaking, you'd just change the A record to the new IP address; however, different hosting works in different ways.

Once your DNS has propagated, the website works fine, and you can log in to the admin dashboard and post as normal, you should change the permissions of the .htaccess and wp-config.php to 0444.
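
For example, from the blog's root directory (a sketch)...

chmod 0444 .htaccess wp-config.php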

Troubleshooting

This method is designed to be as quick as possible. The thing that will take the longest is downloading and uploading the uploads and theme. While this is happening you can either take a break or you can always be working on the other stuff.

If there are any problems with logging into the admin dashboard you can always de-activate the plugins using WP-CLI. To use any WP-CLI commands you must always be in the WordPress site's root directory that you want to work on. The same hosting can have more than one blog, so the location you're in on the command line makes a big difference...

wp plugin deactivate wp-plugin-1

where "wp-plugin-1" is the folder name of a specific plugin that is installed or, to deactivate them all quickly...

wp plugin deactivate --all

If deactivating the plugins does not help, take a look at the access and error logs. If there are any issues highlighted in the logs you can see which file(s) they are related to and then take another look at the .htaccess. The main things to check if there are problems... permissions, owners, .htaccess, plugins, wp-config.php.

After making changes to the .htaccess or plugins you may need to clear your cache to see whether the changes have worked. You may also need to purge everything if you are using a CDN like CloudFlare.

Some plugins need to write to your hosting, so if there are any problems with this you'll get errors; especially if a plugin wants to write to your .htaccess file after you've changed it to 0444. You may have to change the permissions on the .htaccess back to 0755 briefly, then back to 0444 afterward. Other plugins may have problems writing to the uploads directory, and making everything owned by www-data would fix this. However, it's probably a better idea not to have everything owned by www-data: give your non-root user ownership of everything so that you can update everything with WP-CLI, then just tweak the uploads folder if you want to upload through the admin dashboard.

Conclusion

Always consider security with WordPress blogs, especially where there is more than one blog on the same hosting.

I chose Digital Ocean because they were recommended to me. They seem good so far, if you'd like to try them too you can click the link to get $10 free credit with Digital Ocean.

These days there are often WordPress apps on the control panel of hosting providers that enable you to install WordPress easily with a click. You can install and run WordPress entirely by FTP/SFTP and the admin dashboard, but this can be slow. However, if you are comfortable with the Linux command line, you can install WordPress, themes, and plugins at a lightning fast speed. This is particularly useful if you are moving from one hosting company to another and you have more than one WordPress blog to move at the same time.

Installing WordPress from the Command Line

Sometimes SFTPing to a host can be very slow. WordPress is made up of so many individual files that uploading everything when installing or updating can take a very long time.

This method is a very quick and snappy way to download the latest version of WordPress when you are initially installing your blog…

wget http://wordpress.org/latest.tar.gz
tar xfz latest.tar.gz

WP CLI

These commands install WP-CLI…

curl -O https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar
chmod +x wp-cli.phar
sudo mv wp-cli.phar /usr/local/bin/wp

You can do this from anywhere because the three lines are 1) download, 2) change permissions and 3) move.

If you install WP-CLI before installing WordPress you can even install WordPress with WP-CLI, see wp core install

wp core download --locale=nl_NL
wp core install --url=example.com --title=Example --admin_user=supervisor --admin_password=strongpassword --admin_email=[email protected]
wp core version

The 3 lines are 1) download, 2) install and 3) check the version of WordPress that is installed.

Installing Plugins

Assuming you have already set up the WordPress blog (wp-config.php and database), you can then install and activate multiple plugins very easily like this…

wp plugin install wordpress-seo jetpack post-volume-stats add-target-fixer --activate

Another use might be that you can easily de-activate plugins if you ever have problems logging in…

wp plugin deactivate plugin-name

Search and Replace

One thing that you cannot do with SFTP and the WordPress admin dashboard is a sitewide search and replace. There are probably plugins that will help you do this, but it is made very easy with WP-CLI…

wp search-replace oldstring newstring

The --dry-run flag also helps you to see what you’re about to change before you change it.

Updating WordPress

Perhaps the most useful function of WP-CLI for me is the ability to update WordPress very easily and quickly. This is all it takes…

wp core update

Summary

For more info see the docs at WP-CLI.