Here’s how to set up an AWS Elastic Beanstalk instance and update it through Git. You’ll need the appropriate access on your AWS account; here we’re going to use Elastic Beanstalk (EB), RDS, IAM, the EB CLI, GitHub and Git.

Setting up Laravel on Elastic Beanstalk

This setup is for an Ubuntu machine. Most of this comes from this url

  • Create a new private repository on Github (or wherever you want to store your GIT repo). It should be private because you will be adding your .env file.
  • Clone the repo on your local machine (in this case an Ubuntu Desktop).
  • In the directory you’ve cloned the repo into, install a fresh version of Laravel. You may need to install it into another empty directory and then copy the files across, because the directory you want to install Laravel into isn’t empty.
  • Check that Laravel is working locally.
  • Once you have a working version of Laravel you can save the contents of the Laravel directory minus the vendor folder to a zip file using the command…
  • zip ../laravel-default.zip -r * .[^.]* -x "vendor/*"
  • Create a new EB instance and use the default application to begin with. Your EB url should now give you a holding page when you go to it in a browser.
  • Now to put your Laravel project onto the EB instance you click the “Upload and Deploy” button and select the “laravel-default.zip” you made previously.
  • Now, when you go to the EB URL there may be an error; appending “/public” to the URL should make the site load. To fix it properly, go to “Configuration” > “Software” and set “Document root” (the first option on the form) to “/public”.

Connecting Elastic Beanstalk to an RDS database

At this point, you should have a working version of Laravel on your EB that you have uploaded manually and that isn’t connected to a database.

  • To connect to a database modify “Database” in “Configuration”. This is where you can make an RDS instance for your website.
  • Once the RDS instance is made you’ll still need to allow access from the EB instance and your local machine. Go to the RDS instance and under the “Security” tab there should be a “VPC security groups” heading; click the link below it.
  • Having clicked on the link you should now see some tabs that include “Inbound” and “Outbound”. Click “Inbound” and add a “MySQL/Aurora” rule for your local IP; this creates a rule for port 3306.
  • Also, to allow the EB instance to access the RDS DB you’ll need to add its security group. In the “source” field start typing “sg-” to get a list of all the available security groups, select the appropriate one, then “save”.
  • You can now edit the “.env” file with your RDS information and it should be able to connect locally and from your EB instance.
  • Test out your new database by running the migration locally (as shown below). If it works you can assume it will work from EB too, so update by making another zip with the updated files and then “Upload and Deploy”.
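
For illustration, the relevant .env lines look something like the following; the host, database name and credentials are placeholders and yours will come from your RDS instance…

DB_CONNECTION=mysql
DB_HOST=my-rds-instance.xxxxxxxxxxxx.eu-west-2.rds.amazonaws.com
DB_PORT=3306
DB_DATABASE=laravel
DB_USERNAME=admin
DB_PASSWORD=your-password

Then run the migration locally with php artisan migrate.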

Your AWS EB instance should now be able to chat with the database freely but you’re still updating with the zip files.

Deploying from GIT to your Elastic Beanstalk instance

Most of this comes from this url and this url

  • Update your .gitignore file, then add everything you need from your project directory to the empty repo you set up on GitHub at the start. Once again, the vendor directory should be excluded, along with any junk from your IDE.
  • Install the latest version of Python 3.
  • Run python3 --version to make sure the default version of Python 3 is the one you want. If it isn’t, you’ll have to follow something like this url. NB: python and python3 are different commands, so follow the advice from the url but use python3 instead of python.
  • Once the EB CLI is installed, all you need to do is run eb init from your project directory to set up your Git repo with EB. You’ll need an access key ID/secret from a user in IAM: once you have created/clicked on the user, go to “Security credentials” and create an access key. Save this info, as you won’t be able to view the secret more than once.
  • Next, with EB successfully initialized, all there is left to do is deploy. So, commit your current working Laravel site (and push to GitHub if you like); EB will use the version you have committed/staged in your current branch. To do this, run eb deploy --staged (see the sketch after this list).
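
The whole flow boils down to roughly the following; this is a sketch, assuming the EB CLI is installed via pip for your user…

pip install awsebcli --upgrade --user   # install the EB CLI
eb init                                 # link this repo to an EB application/environment (prompts for the IAM keys)
git add -A
git commit -m "Deploy Laravel site to Elastic Beanstalk"
eb deploy --staged                      # deploys the files currently staged in Git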

Believe it or not, that should have deployed to your instance. Huzzah!!

Troubleshooting

If the Ubuntu machine you are developing on is a new build or has not been used recently for development you’ll need to update and upgrade. When installing Laravel you’ll need to install all the PHP modules it needs (in my case it was “mbstring” and “dom”), you’ll also need to make sure “mysql-server” is installed locally.
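
On Ubuntu that boils down to something like the following; the package names are assumptions for a typical setup, so adjust the PHP version and modules to suit…

sudo apt-get update && sudo apt-get upgrade
sudo apt-get install php-mbstring php-xml mysql-server   # php-xml provides the "dom" extension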

Summary

This process is not too bad at all. There are different ways to update an EB instance, e.g. using AWS CodePipeline, but if you prefer GitHub over AWS’s CodeCommit this method is very straightforward to set up. Using this method on a production website as described here would mean testing locally and only deploying once the work had been verified. You could also set up a dev server and deploy there first for further testing before deploying to the live website.

In this post, we look at some ways to tighten up security and increase the speed of modern websites.

How to Implement Security HTTP Headers to Prevent Vulnerabilities? talks about some of the headers that should be modified from their defaults for increased security.

The headers they list are…

  • X-XSS-Protection
  • HTTP Strict Transport Security
  • X-Frame-Options
  • X-Content-Type-Options
  • HTTP Public Key Pinning
  • Content Security Policy
  • X-Permitted-Cross-Domain-Policies
  • Referrer Policy
  • Expect-CT

One of the things mentioned in my Best Practices for Websites in 2018 article was HTTP Strict Transport Security (HSTS). If you use a CDN like CloudFlare, HSTS can be enabled very easily, even on the free plan. However, some of the other headers can only be added on the Enterprise plan.

Other things it can be helpful to remove from the headers are the exact versions of Apache and PHP. Although, to be fair, there are only a finite number of web servers and programming languages so protection by obscurity is fairly limited.

Updating Headers in Apache and Ubuntu

First of all, a standard fresh install of Ubuntu might not have the headers module installed so add it by…

sudo a2enmod headers
sudo service apache2 restart

Then, most articles say that you should add the headers to the httpd.conf file. This file does not exist in a fresh install of Ubuntu so you have to make it in the location /etc/apache2/httpd.conf then include it in the apache2.conf like so…

Include httpd.conf

Once this is done you can start adding headers to it.
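
If you don’t already have one (see the note about Bitnami below), one rough way to create the file and include it is…

sudo touch /etc/apache2/httpd.conf
echo "Include httpd.conf" | sudo tee -a /etc/apache2/apache2.conf
sudo service apache2 restart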

You should check whether you have a httpd.conf or not before you make one. Bitnami creates a httpd.conf that is already pre-populated with a lot of lines of code. This kind of pre-setup is the whole reason that Bitnami exists.

Removing the Version of Apache from the HTTP Headers

Web servers (Apache, Nginx, IIS) typically do not want you to remove them from the headers because it is a way of showing the world how popular they are. Like social media for web servers. So, the method of removing them can be relatively tricky.

One alternative is to use CloudFlare, which sets the Server header to the value “cloudflare”… Easy!

Alternatively, another easy fix is to remove the version number from Apache and just leave the word “Apache” visible.

With headers enabled and httpd.conf included in the apache2.conf you can add the lines…

ServerTokens Prod
ServerSignature Off

After restarting Apache, the Apache version should now be gone from the headers. You’re telling Apache that the website is in production, so it should turn off the server signature.

Similarly, adding the following line disables the HTTP TRACE method (e.g. a telnet trace of the website), although the response still tells the person you’re using Apache.

TraceEnable off

Remove the Version of PHP from the HTTP Headers

The default for PHP is to not show the version of PHP in the headers, however, I found recently that in a Bitnami install it was actually shown by default.

You can turn this off in the php.ini…

expose_php = Off

As this is a PHP setting it will be the same in Nginx, IIS, etc.

After you have done this you’ll need to restart PHP. For PHP-FPM the service name depends on your PHP version (php5-fpm here; php7.2-fpm on a newer install), like this…

sudo service php5-fpm restart

Updating HTTP Response Headers in Apache

The rest of the headers listed above can be updated in the httpd.conf. Here are a few standard ones that do not need any modifications…

Header always set Strict-Transport-Security "max-age=15552000; includeSubDomains;"
Header always set X-Frame-Options DENY
Header set Referrer-Policy "no-referrer"
Header set X-Permitted-Cross-Domain-Policies "none"
Header set X-XSS-Protection "1; mode=block"
Header always set X-Content-Type-Options "nosniff"
Header always set Expect-CT "enforce, max-age=300, report-uri='https://www.reporting-website.com/'"

The lines added to the httpd.conf for Apache are similar in spirit to, but syntactically different from, the lines you would add to the nginx.conf for Nginx.

Testing the Security of HTTP Headers

The first way to test your headers would be to inspect them in your browser. For Chrome, you would “inspect” then go to the “Network” tab. If the Network tab is empty, reload the page. Once the list of items is populated you can click on the main website which should be at the top, then on the right, there should be a “Headers” tab that lists all the headers.

SecurityHeaders is a great website for testing the security of your headers. The website completely ignores web server version and programming language version as you could argue that removing them does not offer much protection against an attack. Instead, it focuses on the instructions your website is giving browsers from its headers.

Another useful link for updating your HTTP headers that gives examples for different web servers is Hardening your HTTP response headers.

When you first build a website the database may seem fast. All your queries may get executed quickly. But, after a while when the database is much larger the queries may start taking longer. If you haven’t already, it may be time to optimize the MySQL queries!

This is a quick guide to optimizing MySQL queries. It’s more general and theoretical than being a step-by-step tutorial.

In the Beginning

Creating your tables properly in the first place will save you headaches down the road. In the beginning, when you are creating your tables, make sure that all the columns are of the right type (INT, VARCHAR, TEXT, ENUM) and that they have a size where possible. For some columns it’s better to use varchar than text, because varchar has a defined maximum length (traditionally 255 characters) while text is pretty much unlimited. However, you’ll probably need some text columns too, so it’s just a case of using them in a way that will be relatively painless as the site grows.
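
As a rough illustration (using a blog table like the one in the queries below), that might look something like this…

CREATE TABLE blog (
    id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    title VARCHAR(255) NOT NULL,
    status ENUM('draft', 'published') NOT NULL DEFAULT 'draft',
    body TEXT,
    date DATE NOT NULL
);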

Explain

Put EXPLAIN before your MySQL query to find out what is going on. It will give you various useful information such as the type of query MySQL is running on each table in the query and the number of rows it is searching through.

The possible types are (good to bad)…

  1. const/eq_ref
  2. ref/range
  3. index
  4. all

It’s better to have the type “index” than the type “all”, but “eq_ref” is better still. Having a type of “all” is the worst thing for your query.
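
For example, using the blog table from the queries further down, you could run something like this and check the “type” and “rows” columns in the output…

EXPLAIN SELECT id FROM blog WHERE date >= '2016-01-01' ORDER BY id DESC;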

Select Explicitly

Select explicitly, don’t use SELECT *. The worst thing to do is return everything when you have a large number of columns in the table and you only need one or two columns. Only selecting what you need will make a more manageable MySQL query.

If you are doing a large search and returning a lot of rows, ask whether there is a better or quicker way to do what you are trying to do. One possibility might be to only return the id of the rows, then if you need more information you can get it later. That’s just a possibility and would depend on the query and what you needed to do.

Remove Functions

In some relational databases you can use functions in a query and an index will still be used; not in MySQL. Using a MySQL function on a column in the query means that an index on that column won’t be used.

This is an example of using the year() function. This is bad…

SELECT id FROM blog WHERE YEAR(date)='2016' ORDER BY id DESC

It’s better to use BETWEEN for dates…

SELECT id FROM blog WHERE date BETWEEN '2016-01-01' AND '2017-01-01' ORDER BY id DESC

or

SELECT id FROM blog WHERE date >= '2016-01-01' AND date < '2017-01-01' ORDER BY id DESC

Indexing

So, you’re selecting only what you want from the query and not using any functions, it’s probably time to index!

The aim is to not have any “all” types for any of the tables when you run the EXPLAIN on your query. You might also be able to get the number of rows searched down, but that might not be possible.

Simply put, you add the columns that a table uses in the query (in the WHERE, JOIN and ORDER BY clauses) to an index on that table.

The order of the indexed columns matters!
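
As a rough illustration using the blog query above, a composite index covering the WHERE and ORDER BY columns might look like this (the index name is arbitrary)…

ALTER TABLE blog ADD INDEX idx_date_id (date, id);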

Run the query you are trying to optimize in MySQL or phpMyAdmin, making a note of how long it takes to execute. Add the indexes, run an EXPLAIN, then tweak the indexes, or if they look OK, run the actual query. When you run the query after optimizing it should be quicker, or at least no worse. If the query is worse, your index may have columns in the wrong order.

One thing to remember is that making an index for one query might speed up that query but if another query uses the same index it may actually slow that other query down.

Also, indexing properly should speed up SELECT queries but it will have the opposite effect on UPDATE and INSERT statements.

Limit

LIMIT doesn’t necessarily mean you only search that number of records. You may still be searching all the records then discarding the rest. Or, you may only be searching that number of records when you need to search the entire database. Use LIMIT with care!

This is a simple step-by-step guide to making a PHP composer package that can be listed publicly on packagist.org for anyone in the world to install. We’ll be using Github to update the package. We’ll also add some testing with PHPUnit.

Creating a Composer Package

The basic steps to creating a new composer package are as follows.

  1. Create a Github Repo for the project
  2. Clone the Github repo locally
  3. composer init
  4. composer install
  5. Write the PHP composer package, put the PHP into a src directory.
  6. Commit and push to Github
  7. Give the package a version by using git tag 0.0.1 then git push --tags
  8. Login to Packagist and add the new Github repo to your packagist account.
  9. Make sure all the info packagist.org needs is in your composer.json

After following these steps, your package should now be published on Packagist. Any future changes you make should be pushed to GitHub with a new tag, like this… git tag 0.0.2 then git push --tags. You can list all the tags with git tag. Every time you update in this way, GitHub gets updated and Packagist is also updated automatically.
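
Condensed into commands, the publishing flow looks roughly like this; the repo URL, branch name and version numbers are placeholders…

git clone git@github.com:foo/bar.git && cd bar
composer init                       # answer the prompts to create composer.json
composer install
git add -A && git commit -m "First version of the package"
git push origin master
git tag 0.0.1 && git push --tags    # Packagist picks the new tag up from GitHub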

Our class is called Bar, so our main PHP file has to be Bar.php (upper/lowercase matters!). We’ll put it in a directory called “src”…

<?php 
namespace Foo;
class Bar {
    public function helloworld(){
        return 'Hello, World!';
    }
}

Here is a sample composer.json file. Our namespace is “Foo” so we say that Foo is in the src directory in the composer.json…

{
    "name": "foo/bar",
    "license": "MIT",
    "require": {
        "php": "^7.0"
    },
    "require-dev": {
        "phpunit/phpunit": "^5.7"
    },
    "autoload": {
        "psr-4": {
            "Foo\\": "src/"
        }
    }
}

To use the package, import it by copy/pasting the command line instructions from Packagist. It’ll be something like this… composer require foo/bar. Then, once the package has been installed into the vendor directory, you can start using it, like this, for example…

<?php

require_once 'vendor/autoload.php';

$test = new Foo\Bar();

$test->helloworld();

Updating the Package

For testing purposes, each time you update you need to make sure the latest version is downloaded from Packagist.

Make sure the composer.json of the project you’re importing the package into looks something like the one below. Use a greater-than-or-equal (>=) constraint for the package you’re testing, rather than a caret (^), which restricts you to a compatible version range.

{
    "name": "neil/test",
    "authors": [
        {
            "name": "neil",
            "email": "[email protected]"
        }
    ],
    "require": {
        "foo/bar": ">=0.4.3",
        "phpunit/phpunit": "^6.5"
    }
}

But, even then composer update may still not do anything when you update the package. You may need to run composer clearcache first, then run composer update again (see below). Also, sometimes there is a short lag before Packagist updates, so don’t worry if it doesn’t update straight away the first time.
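
In other words, something like this (using the example package name from above)…

composer clearcache          # alias of composer clear-cache
composer update foo/bar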

Testing the Package

To add testing you might want to use something like PHPUnit or PHPSpec. This is using PHPUnit 6.5 which runs with PHP 7.0…

composer require --dev phpunit/phpunit

Make a directory called tests and make a file called BarTest.php…

<?php
declare(strict_types=1);

use PHPUnit\Framework\TestCase;
use Foo\Bar;

final class BarTest extends TestCase
{

    public function testOutputsExpectedTestString()
    {
        $this->assertEquals(
            'Hello, World!',
            (new Bar())->helloworld()
        );
    }
}

Then, making sure you’re using the “dev” packages you can run the test in the command line like so…

vendor/bin/phpunit --bootstrap vendor/foo/bar/src/Bar.php vendor/foo/bar/tests/BarTest.php

Unit tests will only work on public functions, not private functions.

Troubleshooting

The name of the class must be exactly the same as the filename, and vice versa. If your class is called SomeClass, the file must be SomeClass.php.

Errors can also come from not using git tag to create a version or not having all the info packagist needs in the composer.json.

To create a bash script that will work only for your user, you can store the bash files in your user’s home directory. The standard place to put them would be a folder called bin. Create it if it does not exist, then create the file. The name of the file is the name of the command you want to type to run it. So, if I want to call my command “commandname”, I would do…

mkdir ~/bin
sudo nano ~/bin/commandname

Then, create the script with the shebang at the top…

#!/bin/bash

# Update, upgrade, then restart
apt-get -y update
apt-get -y upgrade
apt-get autoremove
service apache2 restart

# update WordPress through WP-CLI
cd /var/www/html
wp core update
wp plugin update --all

Now, make the file executable…

sudo chmod +x  ~/bin/commandname 

Then, to run the file from any directory, you’ll have to update your user’s .profile file…

sudo nano ~/.profile

Adding one of the following lines to ~/.profile tells Linux that there are executable scripts in the ~/bin directory (the second version prepends ~/bin so your scripts take precedence over system commands of the same name)…

PATH=$PATH:~/bin
PATH=~/bin:$PATH

Now you should be able to run the command, commandname, from any directory.

You can run this manually from the command line, or you can create a cron job (crontab -e) to run it at regular intervals; the entry below runs it at one minute past every hour.

1 * * * * /bin/bash -c "~/bin/commandname"

Then reload the cron with…

sudo service cron reload

You can monitor the cron log in real-time with tail…

tail -f /var/log/syslog

Trying to use AWS S3 for the first time can be confusing. Here is a quick guide to roughly what has to happen.

Basic Steps to Set up S3

The only two sections you need in the AWS console for this are “S3” and “IAM”…

  • Create an S3 bucket.
  • Make an S3 bucket policy.
  • Create a policy to access the bucket in IAM.
  • Create a “programmatic access only” user for the bucket and attach this policy to it (IAM).

Store the info for your user (the secret will not be displayed a second time). You’ll use this to connect with the S3 instance in your code.

Easy, huh?

AWS is pretty good at telling you when you’re about to do something stupid during this process. There are plenty of warning signs on the screen if you make anything public. AWS is all about security and particularly dislikes us making things public that should not be public.

The bucket policy and user policy are both in JSON format.

There is a website to help you make the correct JSON but the form itself is pretty good at telling you if there is an error in your policy or you’ve made your bucket public. An example Bucket policy might look like…

{
    "Version": "2012-10-17",
    "Id": "Policy123465789",
    "Statement": [
        {
            "Sid": "Allow ALL access to the bucket by one user",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111111111111:user/myusername"
            },
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::my-bucket-name",
                "arn:aws:s3:::my-bucket-name/*"
            ]
        }
    ]
}

The user policy (IAM) is created by a wizard, but if you then edit the policy you’ll see that it is also a piece of JSON. Each action is listed in the JSON, so it is easy to remove the actions you don’t need directly in the JSON, or just unselect a specific action when you’re creating the policy.

AWS S3 Docs

The AWS S3 docs for PHP are pretty extensive but getting to the exact thing you need is not always straightforward. I found that I was making a lot of google searches in order to find what I was looking for because the navigation wasn’t that great.

This page on the AWS SDK for PHP S3StreamWrapper was particularly well laid out and useful. The main AWS SDK for PHP docs are pretty extensive as long as you know the name of the function you want to use.

Also, if you search for instructions on how to do a certain thing, like connect to S3, make sure the article is fairly up to date. Earlier SDK versions let you connect in ways that may not always work with the current version of the AWS SDK, although other things may be similar or the same.

As with most programming, there are often multiple ways to complete the same task. For example, in the example below there are at least two ways to output the JPEGs to the screen via PHP (see comments in the code).

You can use this PHP code in any AWS instance that you can code PHP in. I added this to my Lightsail instance but it would also work on EC2 or with any other non-AWS hosting…

S3 Gallery and JPEG Displayer Example

This is some simple code to turn every file in every bucket into a gallery. The code below creates the list of “thumbs”; in this case the thumbs are the full-size images made smaller with CSS.

<?php

// Require the Composer autoloader.
require '../vendor/autoload.php';

use Aws\S3\S3Client;

try {

    // Instantiate the S3 client with your AWS credentials
    $s3Client = S3Client::factory(array(
        'version' => 'latest',
        'region'  => 'eu-west-2',
        'credentials' => array(
            'key'    => 'unique_string', // From AWS IAM user
            'secret' => 'unique_secret_string' // From AWS IAM user
        )
    ));

    //Listing all S3 Bucket
    $buckets = $s3Client->listBuckets();
    foreach ($buckets['Buckets'] as $bucket) {
        $bucket = $bucket['Name'];
        $objects = $s3Client->getIterator('ListObjects', array(
            "Bucket" => $bucket
        ));

        // Show each one 200x200 and link to full-size file...
        foreach ($objects as $myobject) {
            echo "<p><a href=\"/showitem.php?item={$myobject['Key']}\"><img src=\"/showitem.php?item={$myobject['Key']}\" style=\"height: 200px; width: 200px;\"></a></p>\n";
        } // end foreach
    } // end foreach


}catch(Exception $e) {
    // Only show this for testing purposes...
   exit($e->getMessage());
}

Then, to display the files from the S3 bucket, we do not want to have the AWS S3 URL in the browser, so we’re going to display the images through PHP with the showitem.php file. Here is the code for that file; it’s a very simple image displayer…

<?php

// Require the Composer autoloader.
require '../vendor/autoload.php';

use Aws\S3\S3Client;

$bucket = "my-bucket-name";

try {

    // Instantiate the S3 client with your AWS credentials
    $s3Client = S3Client::factory(array(
        'version' => 'latest',
        'region'  => 'eu-west-2',
        'credentials' => array(
            'key'    => 'unique_string', // From AWS IAM user
            'secret' => 'unique_secret_string' // From AWS IAM user
        )
    ));

    $s3Client->registerStreamWrapper();

    if(isset($_GET['item'])){
        $keyname= filter_var($_GET['item'], FILTER_SANITIZE_STRING);

        // Get the object.
        $result = $s3Client->getObject([
            'Bucket' => $bucket,
            'Key'    => $keyname
        ]);

        // Display the object in the browser.
        $type = $result['ContentType'];
        $size = $result["ContentLength"];
        header('Content-Type:'.$type);
        header('Content-Length: ' . $size);
        echo $result['Body'];

        // Alternatively, get file contents from S3 Bucket like this...
        // $data = file_get_contents('s3://'.$bucket.'/'.$keyname);
        // echo $data;
    }

}catch(Exception $e) {
    // Only show this for testing purposes...
    exit($e->getMessage());
}

When you add a file to S3, you’re probably either doing it programmatically, or you’re dragging and dropping into the S3 Browser. When you want to use the file, you’ll find that you can get quite a lot of information from the getObject() method that is listed in the docs. I wanted to find the exact response for content length, so you would just look up getObject and see what is returned.

This is a very quick example. There are some pretty major things here that would be better done a different way. For example, when we connect to S3 with the factory() method, we should probably use the .aws/credentials file to create one or more profiles so that our secret info isn’t listed in the PHP of the public part of the website.

Also, S3 can be accessed with the AWS CLI, for example…
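
These are a few illustrative commands, assuming the AWS CLI is installed and configured (aws configure) with the same IAM user…

aws s3 ls                                    # list your buckets
aws s3 ls s3://my-bucket-name                # list the objects in a bucket
aws s3 cp ./photo.jpg s3://my-bucket-name/   # upload a file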

I had a PHP website that was the last man standing on some slow shared hosting with no SSH access. I decided that, since I would be moving the website anyway, why not try something different with it? The website in question is PHP/PDO/MySQL with no framework.

Having already tried AWS Lightsail App+OS, I wanted to experiment with the Lightsail “OS Only” option. What better thing to do than to install the Nginx web server on it? Starting from scratch with an OS Only box I would be able to take a look at another side of AWS Lightsail (without Bitnami) and also learn about Nginx and using a LEMP stack.

I created a new instance of Lightsail with Ubuntu 18.04 LTS. It was exactly the same as most VPSes that only come with Linux installed. After installing Nginx on Lightsail, the version of PHP I got by installing PHP-FPM was PHP 7.2.10.

Link… https://www.digitalocean.com/community/tutorials/how-to-migrate-from-an-apache-web-server-to-nginx-on-an-ubuntu-vps

But, we’ve jumped ahead. Let’s look at Linux…

First Things First: Linux

The first thing to do is to update and upgrade Linux. Sudo was already installed, so it’s straight into…

sudo apt-get update
sudo apt-get upgrade

I believe that while Debian is a pretty bare bones install, Ubuntu comes with a lot of stuff pre-installed such as sudo and nano, which is very convenient.

Nginx

With Linux updated, we can get the next part of the LEMP stack installed: Nginx (Engine-X). This is the only part of the stack that I’ve not had much experience with in the past, so it was the most interesting part for me. I was expecting it to be more different from Apache than it was, apart from the different style of the config file...

sudo apt-get install nginx

Now, magically the unique domain you have in your Lightsail console will work in your browser, giving you a page like…

Welcome to nginx!
If you see this page, the nginx web server is successfully installed and working. Further configuration is required.

For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.

Thank you for using nginx.

Doing some prep for when we change the DNS and point the domain name at this hosting, we should also make a config file…

The Nginx Config File

The config file should be the name of the site and should be created in /etc/nginx/sites-available. Copy the info from the default file across to the new site, changing the domain name. You can then set up a symlink to “sites-enabled”…

sudo ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/

https://www.digitalocean.com/community/tutorials/how-to-install-linux-nginx-mysql-php-lemp-stack-ubuntu-18-04

sudo unlink /etc/nginx/sites-enabled/default

Test the new Nginx config, then restart to load the new settings with…

sudo nginx -t
sudo systemctl reload nginx

Change the permissions of the /var/www/htdocs/ folder, then upload your files. Convert the .htaccess rules…

https://winginx.com/en/htaccess

Something like this will need converting for Nginx (note that the * is dropped)…

<Files *.inc>
Deny From All
</Files>

becomes…

 location ~ .inc {
deny all;
}

# I also had to convert the "break" to "last" on the mod_rewrite...

rewrite ^/(.*)/$ /item.php?item=$1 last;

Then, add the code to the example.com config file and test it again. Any duplicates will need commenting out with #. In my config file, \.php caused an error, so I removed the backslash.

PHP

Now, we can install PHP and MySQL to complete our LEMP stack…

sudo apt install php-fpm php-mysql 

Luckily, the site I was moving had pretty modern PHP with nothing that needed fixing at all. I uploaded a file calling phpinfo() to test that PHP was working. All good!

Nginx Default Log File

index.php not working! Look at the log file…

tail /var/log/nginx/error.log

Yes, the PHP was fine; it turned out that PDO was unhappy because I hadn’t added the database yet…

MariaDB

Finish off getting MariaDB installed, then check it’s working…

sudo apt install mariadb-client-core-10.1
sudo apt install mariadb-server
sudo systemctl status mariadb

I was getting the error, below…

ERROR 2002 (HY000): Can’t connect to local MySQL server through socket ‘/var/run/mysqld/mysqld.sock’ (2 “No such file or directory”)

So, I did a “locate” for the my.cnf file and the “mysqld.sock” file and added this to the mysql/mariadb config file, my.cnf…

socket  = /var/run/mysqld/mysqld.sock

Then…

sudo service mysql restart

Login for the first time with sudo…

sudo mariadb -uroot

Now you can create the database and database user for the app.
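
A minimal sketch of that from the MariaDB prompt, with placeholder names and password…

CREATE DATABASE myapp;
CREATE USER 'myappuser'@'localhost' IDENTIFIED BY 'a-strong-password';
GRANT ALL PRIVILEGES ON myapp.* TO 'myappuser'@'localhost';
FLUSH PRIVILEGES;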

https://stackoverflow.com/questions/5376427/cant-connect-to-local-mysql-server-through-socket-var-mysql-mysql-sock-38

SSL Encryption

I pointed the domain name at the public IP address with CloudFlare. Server-side SSL encryption (e.g. with Let’s Encrypt, as in the tutorial below) is still to follow.

https://www.digitalocean.com/community/tutorials/how-to-secure-nginx-with-let-s-encrypt-on-ubuntu-18-04
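
A minimal sketch using certbot with its Nginx plugin; the exact package name varies by Ubuntu release…

sudo apt install certbot python3-certbot-nginx   # on older releases this may be python-certbot-nginx
sudo certbot --nginx -d example.com -d www.example.com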

Simple Password Authentication

https://www.tecmint.com/password-protect-web-directories-in-nginx/
sudo apt install apache2-utils
sudo htpasswd -c /etc/nginx/.htpasswd username

Then put the following in the “location” you want to be protected in the config file…

auth_basic "Administrator Login";
auth_basic_user_file /etc/nginx/.htpasswd;

Then, test and restart Nginx.

Force or Remove WWW

For some sites I prefer to keep the www in, so I did the opposite of this on this occasion…

server {
server_name www.example.com;
return 301 $scheme://example.com$request_uri;
}
server {
server_name example.com;
# […]
}

https://stackoverflow.com/questions/11323735/nginx-remove-www-and-respond-to-both

index.php Downloads instead of Displaying

Sometimes the index.php, or any PHP file, can start downloading instead of displaying normally in the browser. The fix for this is to pass the PHP scripts to the FastCGI server. Make sure you use the correct file path; the example below is for PHP 7.2 and will differ for other versions of PHP…

server {
listen 80;
listen [::]:80;
root /var/www/myApp;
index index.php index.html index.htm;
location ~ \.php$ {
try_files $uri =404;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass unix:/run/php/php7.2-fpm.sock;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
}

Debug Mode

To show extra info in the error.log file add the word “debug” to the error_log statement…

error_log  /var/log/nginx/error.log debug;

Example nginx.conf file

This file is taken from here. It shows the SSL configuration…

server {
listen 80;
listen [::]:80;
server_name example.com www.example.com;
return 301 https://example.com$request_uri;
}
server {
listen 443 ssl;
listen [::]:443 ssl;
server_name www.example.com;
ssl on;
ssl_certificate /root/certs/APPNAME/APPNAME_nl.chained.crt;
ssl_certificate_key /root/certs/APPNAME/ssl.key;
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:20m;
ssl_session_tickets off;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!3DES:!MD5:!PSK';
ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate /root/certs/APPNAME/APPNAME_nl.chained.crt;
root /srv/users/serverpilot/apps/APPNAME/public;
access_log /srv/users/serverpilot/log/APPNAME/APPNAME_nginx.access.log main;
error_log /srv/users/serverpilot/log/APPNAME/APPNAME_nginx.error.log;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-SSL on;
proxy_set_header X-Forwarded-Proto $scheme;
include /etc/nginx-sp/vhosts.d/APPNAMEd/.nonssl_conf; include /etc/nginx-sp/vhosts.d/APPNAME.d/.conf;
return 301 https://example.com$request_uri;
}
server {
listen 443 ssl;
listen [::]:443 ssl;
server_name example.com;
ssl on;
ssl_certificate /root/certs/APPNAME/APPNAME_nl.chained.crt;
ssl_certificate_key /root/certs/APPNAME/ssl.key;
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:20m;
ssl_session_tickets off;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!3DES:!MD5:!PSK';
ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate /root/certs/APPNAME/APPNAME_nl.chained.crt;
root /srv/users/serverpilot/apps/APPNAME/public;
access_log /srv/users/serverpilot/log/APPNAME/APPNAME_nginx.access.log main;
error_log /srv/users/serverpilot/log/APPNAME/APPNAME_nginx.error.log;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-SSL on;
proxy_set_header X-Forwarded-Proto $scheme;
include /etc/nginx-sp/vhosts.d/APPNAME.d/.nonssl_conf; include /etc/nginx-sp/vhosts.d/APPNAME.d/.conf;
}

Conclusion

Website moved and working as it did before, but possibly slightly faster.

In just the short time I have been using it Nginx is already growing on me. I like the simplicity. I like that it is quite similar to Apache in some ways. And, I like the fact that it should be faster than Apache.

Not being able to use .htaccess files, and the Nginx config being different from Apache virtualhost files, was not bad at all. A combination of a htaccess-to-Nginx converter and Google/Stack Overflow has already taught me a lot about how to replicate with Nginx what I might do with an .htaccess or virtualhost file.

As expected, the “OS Only” version of AWS Lightsail was much more like a standard VPS and there was nothing too hard in setting it up and moving a site across and onto Nginx.

AWS Lightsail is the closest thing AWS has to shared hosting. It is their quick, easy and inexpensive off-the-shelf hosting that has SSH access and many of the benefits of using a more expensive EC2 instance.

It is an affordable entry point into cloud computing, but is it any good?

I decided to try out Amazon’s cheapest hosting offering by moving a WordPress blog from some shared hosting to Lightsail. I have tried EC2 in the past so this was not my first experience with AWS, but I was curious to see what their new more consumer-based hosting was like.

Creating an Instance

When you create an instance of Lightsail you choose how big or small you want it. You also choose whether you want just the OS, or an app pre-installed with Bitnami (“App+OS”). The options for the app include a pre-installed WordPress blog, LAMP, MEAN, LEMP or several other applications. Alternatively, you can choose “OS Only”, where you currently have the choice of either Windows or a number of Linux flavors: Amazon Linux, Ubuntu, Debian, FreeBSD, openSUSE and CentOS.

I went with the PHP 7 LAMP stack option in the smallest size ($3.50 per month). I chose this option because I wanted to make sure WordPress was exactly the way I wanted it. And I wanted to see what the LAMP option was like.

In the price you also get a dedicated IP which makes setting up a breeze before pointing the domain name at the new instance, definitely a nice touch.

The LAMP 7 option came with PHP 7.1. But it’s possible to upgrade. All the elements of LAMP come pre-installed (Linux, Apache, MySQL and PHP) but you’ll want to configure them to your needs.

The main thing you can say about the setup is that it was lightning fast. Within seconds I had a fully operational instance. In the past, when setting up some hosting you might have assumed it would take at least a couple of days. Because the dedicated IP is plainly visible on the AWS console, you can immediately see the default index page in your browser.

First Look at Lightsail App+OS

The main difference between the “App+OS” option and a normal VPS is Bitnami. You notice right away that the default username is “bitnami”, and after logging into the Linux console you get a large “Bitnami” logo at the top of the screen.

So, what is this Bitnami?

Bitnami

Amazon AWS has so many quirks that you might assume Bitnami is an AWS thing (like their own Amazon Linux), but it is quite widely used in cloud computing (including Oracle Cloud and Google Cloud Platform).

With the “App+OS” Bitnami a lot of the things you normally have to do to set up a LAMP stack are already done for you. For example, Apache is pre-installed with most/all modules and even MySQL is pre-installed. However, to find your root login for MySQL you’ll need to look for it, see below.

The other slightly unusual thing you notice with Bitnami upon logging in over SSH or SFTP is the directory structure: the apache.conf does not look the same as normal, and where are the virtual host files?

Bitnami uses httpd-app.conf, httpd-prefix.conf and httpd-vhosts.conf files, as described here.

This is unusual and I imagine many people who do not want to use Bitnami would want to use the “OS Only” option. While it may take a little longer to set up, once that’s done you have a “normal” Bitnami-free Linux instance.

Transferring a Website to Lightsail

Having gone with the LAMP (PHP 7) option I basically followed my guide from here to move a WordPress blog over to different hosting. With minimal setup to do it was mainly a case of setting up the database, installing WordPress then using WP-CLI to install the plugins and theme.

As the instance was just going to be hosting one website I didn’t have to worry at all about the virtualhosts as everything was set up to just work from the off.

My first question was how to log in to MySQL. The login info for MySQL did not appear to be anywhere in the AWS console. To find the password for the root user you need the Bitnami application password. From the home directory (where you arrive after logging in) just type…

$ cat bitnami_application_password

Transferring everything across, most things just worked. While PDO worked fine in normal PHP pages, I had to tweak the php.ini to get PDO to work from a script run with cron. For me, I just had to uncomment the .so file for PDO which was almost the last line of the php.ini.

After changing something like the php.ini you’ll have to restart. The following command seems to stop everything (Apache/httpd, PHP and MySQL), then restart everything; perfect for making sure everything gets restarted all at once, but not very graceful (from here)…

$ sudo /opt/bitnami/ctlscript.sh restart

To restart just Apache you’d add “apache” to the end…

$ sudo /opt/bitnami/ctlscript.sh restart apache

Linux

While some things are very different in Bitnami, it’s basically just a Linux instance. The Linux version I got with the LAMP (PHP 7) option was actually Ubuntu 16.04, so if you want the latest version of Ubuntu (18.04 is currently the latest LTS), or a different flavor of Linux, choose the “OS Only” option. I am most comfortable with Ubuntu/Debian and a lot of the standard CLI functions are exactly the same as Ubuntu.

Nano comes pre-installed and was the default editor for the crontab.

$ crontab -e

BTW, cron needs the full path to php, i.e. something like…

* * * * * /opt/bitnami/php/bin/php -f /opt/bitnami/apache2/htdocs/scripts/index.php "name_of_method()"

Then…

$ sudo service cron reload

The timezone is quite important, and the related locale settings can also affect your keyboard layout when typing into the Linux terminal. Changing the timezone here is based on Ubuntu 16.04, so something like this would work to list the timezones, set a timezone, then check which timezone you’re using…

$ timedatectl list-timezones   
$ sudo timedatectl set-timezone America/Vancouver  
$ timedatectl

Now that the Linux timezone is set, you may also need to update the timezone PHP uses by updating this line in the php.ini…

date.timezone="Europe/London"

For all the PHP timezone variables, click on your region from the PHP timezones page.

Something else that is the same as Ubuntu is updating and upgrading…

$ sudo apt-get update
$ sudo apt-get upgrade

Once you get used to the quirks and the different directory structure with Bitnami, most things seem the same as a typical Ubuntu instance.

Issue(s) with AWS Lightsail

The first “upgrade” was a large one which took a while. It took so long in fact that either putty went inactive, or my computer went to sleep, or both. After this, the website went down and I had no access to SSH. What I seemed to have to do was not “reboot” the instance, but “stop” and “start” the instance from the AWS console. After this, I had a different public IP address but I was able to fix whatever had happened with the upgrade.

If the restart is the opposite of graceful, stopping and starting was similarly very ungraceful, comparable to doing the same thing with any VPS instance.

Apart from some minor changes that will probably be easy to get used to, I did not have many issues at all.

AWS Lightsail App+OS: Conclusion

Bitnami saved some time during setup, but honestly, any time I saved was probably offset by time spent figuring out what was going on with Bitnami.
I’m not 100% sure that the speed of setup of Bitnami is worth the changes it makes to the Linux operating system. For something like this example, a WordPress blog, that isn’t going to need a lot of administration, the “App+OS” option was fine though.

If you are a purist and don’t mind setting up the Linux instance with everything you need there is always the “OS Only” option which I don’t believe uses Bitnami. This would be better for a website where you’re going to want to make more changes to the Virtualhost file and/or possibly upgrade to an EC2 instance in future. If you are already a full stack LAMP developer you’ll probably be wanting to use that option for any actual development. App+OS seems to be mainly for people who do not want to get too involved with the “L” or “A” parts of LAMP.

AWS Lightsail with the App+OS option is perfect for someone who just wants to have a cheap WordPress blog running on AWS, as I did here. For a brand new blog, choosing the “WordPress” option would simplify the whole process even more.

I’d say App+OS might also be a good way to play around with something new such as MEAN before starting an actual project with it. Everything would be pre-installed so you could get straight into the javascript and the NoSQL.

So far so good. The instance seems fast for a WordPress blog, it certainly is compared to the previous shared hosting. And, very affordable.

Once upon a time you could buy a domain name and hook it up to some cheap, shared hosting and that was all you had to do. You could build your website or install a WordPress blog and no further configuration was really required. These days you can still do this, but you are leaving yourself open to security, speed, and privacy issues. Surfers are becoming more aware of which websites are safe and which aren’t through information from their browser and anti-virus programs. Not only this, search engines are also starting to penalize websites which do not protect the surfer’s privacy, are slow, or are insecure.

There has been a desire to improve search engine rankings by doing SEO work for a long time. Now, in 2018, SEO is different to how it was 15 years ago, and its importance is joined by security, speed and privacy as the four things everyone should be looking into. I have labeled each header to show which of the four areas the technology is used for.

These are just my thoughts at the moment, much of it will be my opinion. There are people who know much more about everything here so my advice would be to do more research before making any changes to your websites.

Best Practices for Websites in 2018

These best practices are the current general buzzwords for all websites that I think people may be slow to adopt. They should be added to the specific best practices for whichever kind of website you have concerning permissions, ownership, coding, code injection, etc.

This guide is just a quick look at all the topics. I may have missed some topics out. There is a short discussion and generally link(s) to follow to get more information, or tutorials to follow.

If you use a CDN, like CloudFlare, some of these may already be done without you having to think about it but they are good to know about, especially if you do not use a CDN. Also, if you use managed or shared hosting you may not be able to change some of these, but they may already be done for you by your hosting company.

Here are some best practices for websites in 2018. Some of these used to be nice-to-haves but are fast becoming must-haves, if they are not already.

Google Audit (speed)

Google Audit on the Chrome browser has replaced Google Pagespeed and offers a lot more detail than before as to how Google views your website.

Much of what Google Audit looks at is the speed of your website, especially over mobile networks. It wants the content that first appears in the browser to appear very quickly, loading content from further down the page afterwards. The audit mainly covers the content of the website and how quickly it loads. The harshest test is to run it in mobile mode with “3G/PC throttling” switched on: Google wants the above-the-fold content to be displayed quickly even on slow 3G.

Every time I do an Audit I have to be prepared for it to be painful reading. The good thing is that it highlights issues that you might not have seen, or you might have thought that they were fixed.

SSL/HTTPS (privacy)

Enabling SSL encryption and forcing your website to use HTTPS has hit the headlines, mainly because of changes to browsers which mean that HTTP-only websites are starting to look like bad places to visit.

The docs for Apache SSL are here… link

The disadvantage of just using SSL encryption by itself is that the website can often be much slower than it would have been without the encryption, due to the extra handshakes that are needed with HTTPS. But there are ways to further tweak HTTPS that will improve both the security and the speed of HTTPS websites. Most of these changes can be made in the SSL config file in Debian; if you add them to another file (e.g. apache2.conf or a virtualhost file), make sure that there are no conflicts.

To check your own site to see how it ranks for security this website gives a good overview and even gives you a grade to show exactly how secure it thinks your site is.

TLS Session Resumption (security/speed)

TLS Session Resumption is configured in the SSL config file on Apache web servers. By default, it should be enabled. Check whether this is enabled for your website at SSL Labs.

TLS Session Resumption is the default with Cloudflare Flexible SSL… link.

HTTP/2 (speed)

If you use a CDN, HTTP/2 may already be set up, or it may be an option that you can select. If you do not use a CDN you should check that your server is compatible with HTTP/2, as I did in this post.

Enabling HTTP/2 before HTTP/1.1 looks like this…
Protocols h2 http/1.1

HTTP/2 wiki… link

HSTS (speed/security)

HSTS wiki… link

How to use HSTS… link

On Debian you have to enable the headers module (a2enmod headers), then add this to the virtualhost file or the apache2.conf file…

15552000 seconds is 6 months.

# Use HTTP Strict Transport Security to force client to use secure connections only
Header always set Strict-Transport-Security "max-age=15552000; includeSubDomains;"
Header always set X-Frame-Options DENY
Header always set X-Content-Type-Options "nosniff"

Then restart apache and test with SSL Labs.

Perfect Forward Secrecy (privacy/security)

Enabling Perfect Forward Secrecy (PFS)… link also link.

Use TLS (security)

Some cryptographic protocols are deprecated because they can be broken and are thus insecure. Very old browsers may not support TLS 1.1 or TLS 1.2, so you have to strike a compromise between security and accessibility. If you think a lot of your viewers may have older browsers you can keep SSL 2.0, SSL 3.0 and TLS 1.0 enabled; however, these are all insecure. Allowing only TLS 1.0+ is better, and allowing only TLS 1.1+ is much more secure. The risk of forcing too high a cryptographic protocol is that there may be people using browsers that do not support your chosen protocols. It’s a balancing act which comes down to your own decision about what is more important: security or accessibility.

If you just wanted to allow TLS 1.1 and TLS 1.2 you would add this to your ssl.conf or apache2.conf (in Debian). Be careful that there are no conflicts between these two files and the individual virtualhost files…

SSLProtocol TLSv1.2 TLSv1.1

You can check which browsers use which cryptographic protocols at this link.

DNS CAA (security)

Specifying which certificate authorities are allowed to issue certificates for your website also makes your website more secure. You do this through your domain registrar, DNS provider or CDN, where available.

On CloudFlare using their Flexible SSL you would need the following…

example.com. IN CAA 0 issue "comodoca.com"
example.com. IN CAA 0 issue "digicert.com"
example.com. IN CAA 0 issue "globalsign.com"
example.com. IN CAA 0 issuewild "comodoca.com"
example.com. IN CAA 0 issuewild "digicert.com"
example.com. IN CAA 0 issuewild "globalsign.com"

Taken from the CloudFlare blog

See also…

  • TLS False Start (Speed)
  • OCSP stapling

DNSSEC (security)

DNSSEC was designed to protect applications (and caching resolvers serving those applications) from using forged or manipulated DNS data, such as that created by DNS cache poisoning. All answers from DNSSEC protected zones are digitally signed. By checking the digital signature, a DNS resolver is able to check if the information is identical (i.e. unmodified and complete) to the information published by the zone owner and served on an authoritative DNS server. While protecting IP addresses is the immediate concern for many users, DNSSEC can protect any data published in the DNS, including text records (TXT) and mail exchange records (MX), and can be used to bootstrap other security systems that publish references to cryptographic certificates stored in the DNS such as Certificate Records (CERT records, RFC 4398), SSH fingerprints (SSHFP, RFC 4255), IPSec public keys (IPSECKEY, RFC 4025), and TLS Trust Anchors (TLSA, RFC 6698).

link

Caching (speed)

CDN

Use a CDN to serve cached copies of your content from locations closer to your visitors. See also clouds.

Service Workers

A service worker is JavaScript that creates a cache of the website on the viewer’s machine so that they can still view your website if they lose their internet connection.

301 Redirects (SEO/speed)

Having pages that load quickly and not having duplicate content are big parts of SEO. 301 redirects tell search engines and browsers that they should be using a certain URL. For example, you should be redirecting from HTTP to HTTPS, and you can redirect from non-www to www or vice versa.

Canonical URLs (SEO)

With a canonical URL you are telling the search engine the exact URL it should be using. This is another method of ensuring that the pages are not going to be listed several times and appear to be duplicate content to search engines.

Schema.org (SEO)

You add schema to your HTML markup. This is mainly for search engines as it is not visible on the page… link

Have a Privacy Policy (privacy)

GDPR was launched in the EU in 2018. Data protection has been around for a long time, but the addition of GDPR means that websites who have European visitors should definitely consider having a privacy policy. This is all to do with collecting data on individuals, and how that data is used. It’s probably safer not to collect any data at all or as little as possible. I know useful stats-based websites that have closed as a direct result of GDPR, which is a shame. On the plus side, it gives Europeans more control over their data, which is probably a good thing.

Conclusion

Do an analysis of your website on SSL Labs and do a Google Audit. Both sets of results will give you a list of things that are good and things that are bad. You can seek to improve the things that are bad, some of which will be listed in this article. It is probably not possible for mere mortals to get 100% perfect, but a lot of these steps are both free and easy to implement so it’s worth trying to get as high a score as possible.

I have focussed on privacy, security, speed and SEO in this guide. There are considerations that have always been around, or are not especially new for 2018, such as accessibility and having a mobile-friendly website, which should also be looked at if you have not already.

Some of this is primarily aimed at mobile users. Google Audit and service workers, in particular, are very concerned with how the website behaves on mobile connections, which may be intermittent. The benefit of working on these, along with having a mobile-friendly website, is that you may well get more mobile visitors. Google wants to send mobile visitors to websites they’ll enjoy using, so it is gradually increasing the importance of these factors in its mobile rankings.

Here are some tweaks you can make to various parts of Linux to make the whole experience a little easier and more intuitive. This guide is for Ubuntu and Debian flavours of Linux.

Change the Config Editor to Nano

The Debian default editor is Joe; if you do not know this text editor, change it to one you know, e.g. Nano. This command gives you all the options available so you can select the editor you wish to use…

sudo update-alternatives --config editor

Now, commands that use the default editor such as visudo will use your chosen editor.

Turn off passwords for a User

Once the config editor is nano, you can edit the sudoers file with the visudo command…

sudo visudo

One thing you might want to do is turn off passwords for yourself so that you do not have to keep typing your password when you run sudo commands. Add this line near the end of the file, after the “%sudo” group line…

myuser ALL=(ALL) NOPASSWD:ALL

Tweak Nano

Some changes I like to make when I first set up Linux are on Nano. I like to put smooth-scrolling on and to turn the keypad back to numeric. To do this, edit the nano config file by running this command as either root or with sudo…

sudo nano /etc/nanorc
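
For reference, these are the sort of lines I add; this assumes an older nano (2.x-era) where both options exist in nanorc…

echo "set smooth" | sudo tee -a /etc/nanorc          # smooth scrolling
echo "set rebindkeypad" | sudo tee -a /etc/nanorc    # make the numeric keypad behave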

Installing More Than One Version of PHP

To list all the versions of PHP that are installed you can type…

update-alternatives --list php

This then allows you to switch between them if you have more than one installed at any one time. Really this is more useful on a development machine, it is probably not needed on a web server.
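
If you do have more than one version registered with update-alternatives, the companion command to switch between them is…

sudo update-alternatives --config php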