I look up everything before I use it. An IDE such as PhpStorm has the PHP docs built in, so it can tell you a lot of the information you need, or there is the old-school route of looking on php.net or Stack Overflow.

I had to check whether a string ended in a certain needle, and I stumbled upon this Stack Overflow post.

The author of the accepted answer, ircmaxell, lays out the various options you have for checking that the last character is a comma. He seems to favor string functions such as this one, presumably because they make the most sense to him. All it says is that the character one in from the end of the string should be a comma. Simple and clear…

if (substr($string, -1) == ',') {

The above is technically shorthand for something like the following, where you manually check the length of the string and say that the character in the last position should be a comma…

if ($string[strlen($string) - 1] == ',') {

The next version is a bit different. Searching from the end of the string backwards towards the start, he looks for a comma; if one is found, he checks whether its position is at the end of the string…

if (strrpos($string, ',') == strlen($string) - 1) {

The answerer does not seem to like the following version, but I love it! preg_match is the way you use amazingly powerful regex in PHP. The regex says that if there is a comma at the end of the string, the expression is true. When you are just looking for a comma, one of the earlier ways might be simpler, but doing it this way prepares you better for more complex problems…

if (preg_match('/,$/', $string)) {

The method the answerer hates the most is the last example he gives. He converts the string into an array, splitting the string wherever there is a comma. The last element of the array will then be '' (an empty string) if the string ends in a comma. This may not be very readable, and it may not be the best way to solve this particular problem, but the mechanics of solving a string problem with an array can be very powerful too…

if (end(explode(',', $string)) == '') {

Stack Overflow does not (or did not, at least) tolerate opinion very much, but we see some opinions creeping into the comments…

I know it's subjective, but I find the preg_match one to be the most readable – mastazi Sep 21 '15 at 0:23

Then, also in the comments…

From PHP 7.1 you can use if ($string[-1] == ','). It's clear and faster than using substr(). – Nick Rice Jan 21 '18 at 11:35

So, in a later version of PHP they have created a shorthand version of the answerer’s preferred method!
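As a quick sketch of that shorthand (PHP 7.1 added negative string offsets, so index -1 is the last character):

```php
<?php
// PHP 7.1+ negative string offset: index -1 is the last character.
$string = 'Hello, World,';

if ($string[-1] == ',') {
    echo "ends with a comma\n";
}
```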

For my specific problem, I was not looking for a comma; I was looking for a string of unknown length. That rules out most of the answers, so I went with the preg_match method.
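A minimal sketch of that approach (the helper name endsWith is my own): run the needle through preg_quote() so any regex metacharacters in it are treated literally, then anchor the pattern to the end of the string. Since PHP 8 there is also a native str_ends_with() that does this.

```php
<?php
// Hypothetical helper: does $haystack end with $needle?
function endsWith(string $haystack, string $needle): bool
{
    // preg_quote() escapes regex metacharacters in the needle;
    // '$' anchors the match to the end of the string.
    return (bool) preg_match('/' . preg_quote($needle, '/') . '$/', $haystack);
}

var_dump(endsWith('Hello, World', 'World')); // bool(true)
var_dump(endsWith('Hello, World', 'Hello')); // bool(false)
```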


If I had the problem of looking for a single comma at the end of a string, I might use one of the other methods listed here. Or, if I was looking for it in order to delete it, I might use rtrim() to remove the offending comma. Or, are we looking for rogue commas that should be full stops? If so, it might be better just to make sure the last character is a full stop. Rather than 1) searching for it and then 2) fixing it, maybe we can do both steps in one.
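For instance, rtrim() can do the find-and-fix in one move (a sketch with made-up data):

```php
<?php
$string = 'One, two, three,';

// Delete any trailing commas...
echo rtrim($string, ','), "\n";          // One, two, three

// ...or normalise the ending to a single full stop:
echo rtrim($string, ',.') . '.', "\n";   // One, two, three.
```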

Sometimes the question isn’t “how do I do x?”; sometimes the question should be “why do I want to do x?”

Coding that is simple yet powerful lets you solve many different problems, which is why regex is one of my favorite tools. In PHP you might use it in preg_match or preg_replace, but regex is available in many different programming languages, although the syntax can differ slightly from language to language.

Regex101 is a good place to try out regex before inserting it into your code.

Negative lookaheads

You might have a pair of tags and instead of using PHP’s strip_tags() function you might want to do something a bit more custom with regex.

To get everything within specific tags you could do something like this, which assumes there aren’t going to be any other tags within the tag you’re looking for…


You’re looking for a <tag>, then anything except a <, then a </tag>. Putting part of the pattern inside brackets () makes it a capturing group.
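The pattern being described would be something along these lines (the tag name “tag” is a placeholder):

```php
<?php
// <tag>, then any run of characters that aren't '<' (captured), then </tag>.
$html = 'before <tag>Some content</tag> after';

if (preg_match('/<tag>([^<]*)<\/tag>/', $html, $matches)) {
    echo $matches[1]; // Some content
}
```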

If you thought there was a possibility there would be other “h1” or “p” tags inside your tags you would need to be more specific, possibly using a negative lookahead like…


This time (?!<\/tag>).* means that you are looking for anything that isn’t </tag>, so another tag with a different name is not going to end the match early.
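A sketch of that idea, written with the common “tempered dot” form (?:(?!<\/tag>).)* so the lookahead is checked at every position rather than only the first:

```php
<?php
// Capture everything up to the first </tag>, even across other nested tags.
$html = '<tag>Has a <b>nested</b> element</tag>';

if (preg_match('/<tag>((?:(?!<\/tag>).)*)<\/tag>/s', $html, $matches)) {
    echo $matches[1]; // Has a <b>nested</b> element
}
```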

But even this is not perfect. If you had two tags named “tag” nested within each other, this might not give you the answer you were looking for.

More information on negative and positive lookaheads, lookbehinds and lookarounds here.


Bart Simpson Head Animation

How did we manage before smartphones? The other day I saw an article about the Simpsons in CSS in my news feed on my phone. It’s from 2014, but Google thought it was important enough to tell me about it in 2020. Thirteen characters from the Simpsons have been drawn (and many animated) in nothing but HTML and CSS. This “Bart Head” CSS is taken from The Simpsons in CSS.

My Simple Version of the Animation

Here is my attempt at copying some of the very cool CSS in a very basic way…

Explanation of the CSS

  • Both of the faces above are created by nesting DIV elements within each other.
  • Sizing is explicitly specified by using CSS box-sizing.
  • The order of the DIVs is crucial to hiding what we want to hide.
  • The animation has a duration, a delay, starts straight away and runs forever.
  • The CSS keyframes allow us to modify any part of the element’s CSS at any point in the duration.

The basic principle behind the complicated shape of the Bart head is to make rectangular elements with or without colour, then apply skew, rotation, border-radius and borders to make different shapes, which are pieced together in a particular order, overlapping them to form the face.

The outermost DIV is the canvas, and this is set to “position: relative” and centered. Everything directly within the head has “position: absolute” and uses the canvas as its containing block. Nested elements use their parent as the containing block. More information can be found in the CSS positioning docs.

Nesting one element inside another allows the parent DIV to act like a mask to the child element(s) by using “overflow: hidden”. This technique can be seen most clearly in the eye.

The eye itself is a circle. In this case, a circle is a square div with “border-radius: 50%” and a white background. The outline of the eyes extends outside the outline of the face in my animation; this is because the eyes come after the face rather than inside it, although they could also sit inside the face if overflow was visible. The pupil is a DIV within the eye DIV, and its position is absolute, so it uses the parent DIV as its origin. The eyelid is also within the eye but after the pupil – when the eyelid animation moves downwards it covers the white of the eye and the pupil completely. The eyelid is actually a square shape in my animation, but we only see the part of the eyelid that is within the circle of the eye. I added opacity changes to the eyelid so that it’s invisible when it is not in motion, but with the masking effect of nesting the DIVs this is not needed, and it was not used in the Bart head.
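As a rough sketch of the eye described above (the class names, sizes and colours are my own):

```css
/* A circular eye that masks a square eyelid (assumed names/sizes) */
.eye {
  position: absolute;
  width: 50px;
  height: 50px;
  background: #fff;
  border: 2px solid #000;
  border-radius: 50%;   /* square div becomes a circle */
  overflow: hidden;     /* the eye masks its children */
}
.pupil {
  position: absolute;   /* positioned relative to the eye */
  top: 20px;
  left: 20px;
  width: 10px;
  height: 10px;
  background: #000;
  border-radius: 50%;
}
.eyelid {
  position: absolute;
  top: -100%;           /* parked above the visible circle */
  width: 100%;
  height: 100%;
  background: #ffd90f;
  animation: blink 4s ease-in-out 1s infinite;
}
@keyframes blink {
  0%, 90%, 100% { top: -100%; } /* eye open */
  95%           { top: 0; }     /* eyelid fully down */
}
```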


There is a lot of CSS that I have not used in my work so far, and even more that is Sass/SCSS. This is an example of some very clever SCSS: background image.

This article about different ways to make a circle in HTML/CSS is also interesting in a very practical way.

Here’s what you do to set up an AWS Elastic Beanstalk instance and update it through Git. You’ll need the correct access on your AWS account, and here we’re going to use Elastic Beanstalk (EB), RDS, IAM, the EB CLI, GitHub and Git.

Setting up Laravel on Elastic Beanstalk

This setup is for an Ubuntu machine. Most of this comes from this url.

  • Create a new private repository on Github (or wherever you want to store your GIT repo). It should be private because you will be adding your .env file.
  • Clone the repo on your local machine (in this case an Ubuntu Desktop).
  • In the directory you’ve cloned the repo to, install a fresh version of Laravel. You may need to install it into another blank directory and then copy the files across, since the directory you want to install Laravel into isn’t empty.
  • Check that Laravel is working locally.
  • Once you have a working version of Laravel you can save the contents of the Laravel directory minus the vendor folder to a zip file using the command…
  • zip ../laravel-default.zip -r * .[^.]* -x "vendor/*"
  • Create a new EB instance and use the default application to begin with. Your EB url should now give you a holding page when you go to it in a browser.
  • Now to put your Laravel project onto the EB instance you click the “Upload and Deploy” button and select the “laravel-default.zip” you made previously.
  • Now, when you go to the EB url there may be an error; adding the “public” directory to the url should make it work. To fix this properly, go to “Configuration” > “Software”, where the “document root” is the first option on the form, and set it to “/public”.
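If you prefer to keep that setting in the repo rather than the console, the document root can also be set with an .ebextensions config file on the EB PHP platform; the option namespace below is my understanding of the standard EB PHP options, so treat it as a sketch:

```yaml
# .ebextensions/document-root.config
option_settings:
  aws:elasticbeanstalk:container:php:phpini:
    document_root: /public
```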

Connecting Elastic Beanstalk to an RDS database

At this point, you should have a working version of Laravel on your EB that you have uploaded manually and that isn’t connected to a database.

  • To connect to a database modify “Database” in “Configuration”. This is where you can make an RDS instance for your website.
  • Once the RDS is made you’ll still need to allow access to the EB instance and your local machine. Go to the RDS instance and under the “Security” tab there should be a “VPC security groups” heading, click the url below it.
  • Having clicked on the link you should now see some tabs that include “Inbound” and “Outbound”. Click “Inbound” and add “MySQL/Aurora” for your local IP; this makes a rule for port 3306.
  • Also, to allow the EB instance to access the RDS DB you’ll need to add its security group. In the “source” field start typing “sg-” to get a list of all the available security groups, select the appropriate one, then “save”.
  • You can now edit the “.env” file with your RDS information and it should be able to connect locally and from your EB instance.
  • Test out your new database by running the migration locally. If it works, you can assume it should work from EB too, so update by making another zip with the updated files and then “Upload and Deploy”.
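The database block of the .env file ends up looking something like this (every value below is a placeholder):

```ini
DB_CONNECTION=mysql
DB_HOST=your-instance.xxxxxxxxxxxx.eu-west-2.rds.amazonaws.com
DB_PORT=3306
DB_DATABASE=laravel
DB_USERNAME=admin
DB_PASSWORD=your-password
```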

Your AWS EB instance should now be able to chat with the database freely but you’re still updating with the zip files.

Deploying from GIT to your Elastic Beanstalk instance

Most of this comes from this url and this url

  • Update your .gitignore file, then add everything you need from your project directory to the empty repo you set up on GitHub at the start. Once again, the vendor directory should be missing, along with any junk from your IDE.
  • Install the latest version of Python 3.
  • Run this command to make sure the default version of Python is the one you want: python3 --version. If it isn’t, you’ll have to follow something like this url. NB: python and python3 are different; follow the advice from the url but use python3 instead of python.
  • Once the EB CLI is installed, all you need to do is run eb init from your project directory to set up your Git repo with EB. You’ll need an ID/secret from a user in IAM. Once you have created/clicked on the user, go to “Security credentials” and create an access key. Save this info, as you won’t be able to access the secret more than once.
  • Next, with EB successfully initiated, all that is left to do is deploy. Commit your current working Laravel site (and push to GitHub if you like); EB will use the version you have committed on your current branch. To deploy, run the command eb deploy --staged

Believe it or not, that should have deployed to your instance. Huzzah!!


If the Ubuntu machine you are developing on is a new build, or has not been used recently for development, you’ll need to update and upgrade. When installing Laravel you’ll need to install all the PHP modules it needs (in my case “mbstring” and “dom”), and you’ll also need to make sure “mysql-server” is installed locally.


This process is not too bad at all. There are different ways to update an EB instance, e.g. using AWS CodePipeline, but if you prefer GitHub over AWS’s CodeCommit this method is very straightforward to set up. Using this method on a production website as described here would mean testing locally and only deploying once the work was tested. You could also set up a dev server and deploy there first for further testing before deploying to the live website.

In this post, we look at some ways to tighten up the security and increase the speed of modern websites.

How to Implement Security HTTP Headers to Prevent Vulnerabilities? talks about some of the headers that should be modified from their defaults for increased security.

The list of headers they give is…

  • X-XSS-Protection
  • HTTP Strict Transport Security
  • X-Frame-Options
  • X-Content-Type-Options
  • HTTP Public Key Pinning
  • Content Security Policy
  • X-Permitted-Cross-Domain-Policies
  • Referrer Policy
  • Expect-CT

One of the things mentioned in my Best Practices for Websites in 2018 article was HTTP Strict Transport Security (HSTS). Using a CDN like Cloudflare, HSTS can be enabled very easily, even on the free plan. However, some of the other headers can only be added on the Enterprise plan.

Other things it can be helpful to remove from the headers are the exact versions of Apache and PHP. Although, to be fair, there are only a finite number of web servers and programming languages, so this kind of protection by obscurity is fairly limited.

Updating Headers in Apache and Ubuntu

First of all, a standard fresh install of Ubuntu might not have the Apache headers module enabled, so add it with…

sudo a2enmod headers
sudo service apache2 restart

Then, most articles say that you should add the headers to the httpd.conf file. This file does not exist in a fresh install of Ubuntu, so you have to create it at /etc/apache2/httpd.conf and then include it in apache2.conf like so…

Include httpd.conf

Once this is done you can start adding headers to it.

You should check whether you already have an httpd.conf before you make one. Bitnami creates an httpd.conf that is pre-populated with many lines of configuration; this kind of pre-setup is the whole reason Bitnami exists.

Removing the Version of Apache from the HTTP Headers

Web servers (Apache, Nginx, IIS) typically do not want you to remove them from the headers because it is a way of showing the world how popular they are – like social media for web servers. So, the method of removing them can be relatively tricky.

One alternative is to use Cloudflare, which gives the Server header the value “cloudflare”… Easy!

Alternatively, another easy fix is to remove the version number from Apache and just leave the word “Apache” visible.

With headers enabled and httpd.conf included in the apache2.conf you can add the lines…

ServerTokens Prod
ServerSignature Off

After restarting Apache, the Apache version should now be gone. You’re telling Apache that the website is in production, so turn off the signatures.

Similarly, adding the following line will disable the HTTP TRACE method (e.g. via telnet), although the response still tells the person you’re using Apache.

TraceEnable off

Remove the Version of PHP from the HTTP Headers

The default for PHP is not to show the PHP version in the headers; however, I found recently that in a Bitnami install it was shown by default.

You can turn this off in the php.ini…

expose_php = Off

As this is a PHP setting, it will be the same under Nginx, IIS, etc.

After you have done this you’ll need to restart PHP, like this (substitute the service name for your PHP version, e.g. php7.2-fpm)…

sudo service php5-fpm restart

Updating HTTP Response Headers in Apache

The rest of the headers listed above can be updated in the httpd.conf. Here are a few standard ones that do not need any modifications…

Header always set Strict-Transport-Security "max-age=15552000; includeSubDomains;"
Header always set X-Frame-Options DENY
Header set Referrer-Policy "no-referrer"
Header set X-Permitted-Cross-Domain-Policies "none"
Header set X-XSS-Protection "1; mode=block"
Header always set X-Content-Type-Options "nosniff"
Header always set Expect-CT "enforce, max-age=300, report-uri='https://www.reporting-website.com/'"

The lines you add to httpd.conf are similar in purpose, but not identical, to the lines you would add to nginx.conf for Nginx.

Testing the Security of HTTP Headers

The first way to test your headers is to inspect them in your browser. In Chrome, “Inspect” the page and go to the “Network” tab. If the Network tab is empty, reload the page. Once the list is populated, click on the main document, which should be at the top; on the right there should be a “Headers” tab listing all the headers.

SecurityHeaders is a great website for testing the security of your headers. It completely ignores web server and programming language versions, as you could argue that removing them does not offer much protection against an attack. Instead, it focuses on the instructions your website gives browsers in its headers.

Another useful link for updating your HTTP headers that gives examples for different web servers is Hardening your HTTP response headers.

When you first build a website the database may seem fast; all your queries get executed quickly. But after a while, when the database is much larger, the queries may start taking longer. If you haven’t already, it may be time to optimize your MySQL queries!

This is a quick guide to optimizing MySQL queries. It’s more general and theoretical than being a step-by-step tutorial.

In the Beginning

Creating your tables properly in the first place will save you headaches down the road. In the beginning, when you are creating your tables, make sure that all the columns are of the right type (INT, VARCHAR, TEXT, ENUM) and that they have a size where possible. For some columns it’s better to use VARCHAR than TEXT, because VARCHAR has a declared maximum length (historically 255 characters; modern MySQL allows much more), while TEXT is pretty much unlimited. However, you’ll probably need some TEXT columns too, so it’s just a case of using them in a way that remains relatively painless as the site grows.
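For example, a table defined with explicit types and sizes might look like this (a hypothetical blog table, not one from the article):

```sql
CREATE TABLE blog (
    id INT UNSIGNED NOT NULL AUTO_INCREMENT,
    title VARCHAR(200) NOT NULL,       -- sized, searchable text
    status ENUM('draft', 'published') NOT NULL DEFAULT 'draft',
    body TEXT,                         -- effectively unlimited, use sparingly
    date DATETIME NOT NULL,
    PRIMARY KEY (id)
);
```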


EXPLAIN

Put EXPLAIN before your MySQL query to find out what is going on. It will give you various pieces of useful information, such as the join type MySQL uses for each table in the query and the number of rows it searches through.

The possible types are (good to bad)…

  1. const/eq_ref
  2. ref/range
  3. index
  4. all

It’s better to have the type “index” than “all”, but “eq_ref” is better still. A type of “all” means a full table scan, which is the worst case for your query.
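Using it is just a matter of prefixing the query; the table and columns here are hypothetical:

```sql
-- Check the "type" and "rows" columns of the output
EXPLAIN SELECT id, title
FROM blog
WHERE date >= '2016-01-01' AND date < '2017-01-01';
```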

Select Explicitly

Select explicitly, don’t use SELECT *. The worst thing to do is return everything when you have a large number of columns in the table and you only need one or two columns. Only selecting what you need will make a more manageable MySQL query.

If you are doing a large search and returning a lot of rows, ask whether there is a better or quicker way to do what you are trying to do. One possibility might be to return only the ids of the rows, then fetch more information later if you need it. That’s just a possibility and would depend on the query and what you needed to do.

Remove Functions

In some relational databases you can use functions in a query and still have it optimized; not in MySQL. Using a MySQL function on a column in the query means an index on that column won’t be used.

This is an example of using the year() function. This is bad…

SELECT id FROM blog WHERE YEAR(date)='2016' ORDER BY id DESC

It’s better to use BETWEEN for dates (note that BETWEEN is inclusive at both ends)…

SELECT id FROM blog WHERE date BETWEEN '2016-01-01' AND '2017-01-01' ORDER BY id DESC


SELECT id FROM blog WHERE date >= '2016-01-01' AND date < '2017-01-01' ORDER BY id DESC


Indexing

So, you’re selecting only what you want from the query and not using any functions; it’s probably time to index!

The aim is to not have any “all” types for any of the tables when you run the EXPLAIN on your query. You might also be able to get the number of rows searched down, but that might not be possible.

Simply put, you add every column from a table that appears in the query to the index.

The order of the indexed columns matters!

Run the query you are trying to optimize in MySQL or phpMyAdmin, making a note of how long it takes to execute. Add the indexes, run an EXPLAIN, then tweak the indexes, or if they look ok, run the actual query. After optimizing, the query should be quicker, or at least no worse. If it is worse, your index may have columns in the wrong order.
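As a sketch, adding an index for the date-range query used earlier and then re-checking it with EXPLAIN (the index name is my own):

```sql
ALTER TABLE blog ADD INDEX idx_date (date);

EXPLAIN SELECT id FROM blog
WHERE date >= '2016-01-01' AND date < '2017-01-01'
ORDER BY id DESC;
```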

One thing to remember is that making an index for one query might speed up that query but if another query uses the same index it may actually slow that other query down.

Also, indexing properly should speed up SELECT queries, but it will have the opposite effect on UPDATE and INSERT statements, which must maintain the indexes.


LIMIT

LIMIT doesn’t necessarily mean you only search that number of records. You may still be searching all the records and then discarding the rest. Or you may be searching only that number of records when you actually need to search the entire table. Use LIMIT with care!

This is a simple step-by-step guide to making a PHP composer package that can be listed publicly on packagist.org for anyone in the world to install. We’ll be using Github to update the package. We’ll also add some testing with PHPUnit.

Creating a Composer Package

The basic steps to creating a new composer package are as follows.

  1. Create a Github Repo for the project
  2. Clone the Github repo locally
  3. composer init
  4. composer install
  5. Write the PHP composer package, put the PHP into a src directory.
  6. Commit and push to Github
  7. Give the package a version by using git tag 0.0.1 then git push --tags
  8. Login to Packagist and add the new Github repo to your packagist account.
  9. Make sure all the info packagist.org needs is in your composer.json

After following these steps, your package should now be published on Packagist. For any future changes, push to GitHub and make a new tag, like this… git tag 0.0.2 and git push --tags. You can list all the tags with git tag. Every time you update in this way, GitHub gets updated and Packagist is updated automatically.

Our class is called Bar, so our main PHP file has to be Bar.php (upper/lowercase matters!). We’ll put it in a directory called “src”…

<?php

namespace Foo;

class Bar {
    public function helloworld(){
        return 'Hello, World!';
    }
}

Here is a sample composer.json file. Our namespace is “Foo” so we say that Foo is in the src directory in the composer.json…

{
    "name": "foo/bar",
    "license": "MIT",
    "require": {
        "php": "^7.0"
    },
    "require-dev": {
        "phpunit/phpunit": "^5.7"
    },
    "autoload": {
        "psr-4": {
            "Foo\\": "src/"
        }
    }
}
To use the package, import it by copy/pasting the command line instructions from Packagist. It’ll be something like this… composer require foo/bar. Then, once the package has been installed into the vendor directory, you can start using it, like this, for example…


<?php

require_once 'vendor/autoload.php';

$test = new Foo\Bar();
echo $test->helloworld();


Updating the Package

For testing purposes, each time you update, you need to make sure the latest version is downloaded from Packagist.

Make sure the project you’re importing the package into has a composer.json like this. For the package you’re testing, use a greater-than-or-equals constraint, >=, instead of ^, which would keep you within the current compatible release range.

{
    "name": "neil/test",
    "authors": [
        {
            "name": "neil",
            "email": "[email protected]"
        }
    ],
    "require": {
        "foo/bar": ">=0.4.3",
        "phpunit/phpunit": "^6.5"
    }
}
But even then, composer update may still not do anything when you update the package. You may need to run composer clearcache first, then composer update again. Also, sometimes there is a short lag before Packagist updates, so don’t worry if it doesn’t update straight away.

Testing the Package

To add testing you might want to use something like PHPUnit or PHPSpec. This is using PHPUnit 6.5 which runs with PHP 7.0…

composer require --dev phpunit/phpunit

Make a directory called tests and make a file called BarTest.php…


<?php

use PHPUnit\Framework\TestCase;
use Foo\Bar;

final class BarTest extends TestCase
{
    public function testOutputsExpectedTestString()
    {
        $this->assertEquals(
            'Hello, World!',
            (new Bar())->helloworld()
        );
    }
}

Then, making sure you’re using the “dev” packages, you can run the test from the command line like so…

vendor/bin/phpunit --bootstrap vendor/foo/bar/src/Bar.php vendor/foo/bar/tests/BarTest

Unit tests will only work on public functions, not private functions.


The name of the class must be exactly the same as the filename, and vice versa. If your class is called SomeClass, the file must be SomeClass.php.

Errors can also come from not using git tag to create a version or not having all the info packagist needs in the composer.json.

To create a bash script that will work only for your user, you can store the script in your user’s home directory; the standard place is a folder called bin. Create it if it does not exist, then create the file. The name of the file is the name of the command you will type to run it. So, if I want to call my command “commandname”, I would do…

mkdir ~/bin
sudo nano ~/bin/commandname

Then, create the script with the shebang! at the top…


#!/bin/bash

# Update, upgrade, then restart
apt-get -y update
apt-get -y upgrade
apt-get autoremove
service apache2 restart

# update WordPress through WP-CLI
cd /var/www/html
wp core update
wp plugin update --all

Now, make the file executable…

sudo chmod +x  ~/bin/commandname 

Then, to run the file from any directory, you’ll have to update your user’s .profile file…

sudo nano ~/.profile

Adding the following to ~/.profile tells Linux that there are executable scripts in the ~/bin directory (this is the standard snippet found in Ubuntu’s default .profile)…

if [ -d "$HOME/bin" ] ; then
    PATH="$HOME/bin:$PATH"
fi
Now you should be able to run the command, commandname, from any directory.

You can run this manually from the command line, or you can create a cron job to run it at regular intervals. For example, this runs at one minute past every hour…

1 * * * * /bin/bash -c "~/bin/commandname"

Then reload the cron with…

sudo service cron reload

You can monitor the cron log in real-time with tail…

tail -f /var/log/syslog

Trying to use AWS S3 for the first time can be confusing. Here is a quick guide to roughly what has to happen.

Basic Steps to Set up S3

The only two sections you need in the AWS console for this are “S3” and “IAM”…

  • Create an S3 bucket.
  • Make an S3 bucket policy.
  • Create a policy to access the bucket in IAM.
  • Create a “programmatic-only” user for the bucket and attach this policy to it (IAM).

Store the info for your user (the secret will not be displayed a second time). You’ll use this to connect with the S3 instance in your code.

Easy, huh?

AWS is pretty good at telling you when you’re about to do something stupid during this process. There are plenty of warning signs on the screen if you make anything public. AWS is all about security and particularly dislikes us making things public that should not be public.

The bucket policy and user policy are both in JSON format.

There is a website to help you make the correct JSON, but the form itself is pretty good at telling you if there is an error in your policy or you’ve made your bucket public. An example bucket policy might look like…

{
    "Version": "2012-10-17",
    "Id": "Policy123465789",
    "Statement": [
        {
            "Sid": "Allow ALL access to the bucket by one user",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111111111111:user/myusername"
            },
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::my-bucket-name",
                "arn:aws:s3:::my-bucket-name/*"
            ]
        }
    ]
}

The user policy (IAM) is created by a wizard, but if you then edit the policy you see that it is also a piece of JSON. Each action is listed in the JSON, so it would be very easy to delete actions in the JSON, or just unselect a specific action when you’re creating the policy.

AWS S3 Docs

The AWS S3 docs for PHP are pretty extensive, but getting to the exact thing you need is not always straightforward. I found I was doing a lot of Google searches to find what I was looking for because the navigation wasn’t that great.

This page on the AWS SDK for PHP S3StreamWrapper was particularly well laid out and useful. The main AWS SDK for PHP docs are pretty extensive as long as you know the name of the function you want to use.

Also, if you search for instructions on how to do a certain thing, like connecting to S3, make sure the article is fairly up-to-date. Earlier versions of the SDK let you connect in ways that may not work with the current version; other things may be similar or the same.

As with most programming, there are often multiple ways to complete the same task. For example, in the example below there are at least two ways to output the JPEGs to the screen via PHP (see comments in the code).

You can use this PHP code in any AWS instance that you can code PHP in. I added this to my Lightsail instance but it would also work on EC2 or with any other non-AWS hosting…

S3 Gallery and JPEG Displayer Example

This is a simple piece of code to turn every file in every bucket into a gallery. Below is the code that creates the list of “thumbs”; in this case the thumbs are the full-size images made smaller with CSS.


<?php

// Require the Composer autoloader.
require '../vendor/autoload.php';

use Aws\S3\S3Client;

try {

    // Instantiate the S3 client with your AWS credentials
    $s3Client = S3Client::factory(array(
        'version' => 'latest',
        'region'  => 'eu-west-2',
        'credentials' => array(
            'key'    => 'unique_string', // From AWS IAM user
            'secret' => 'unique_secret_string' // From AWS IAM user
        )
    ));

    // Listing all S3 Buckets
    $buckets = $s3Client->listBuckets();
    foreach ($buckets['Buckets'] as $bucket) {
        $bucket = $bucket['Name'];
        $objects = $s3Client->getIterator('ListObjects', array(
            "Bucket" => $bucket
        ));

        // Show each one 200x200 and link to full-size file...
        foreach ($objects as $myobject) {
            echo "<p><a href=\"/showitem.php?item={$myobject['Key']}\"><img src=\"/showitem.php?item={$myobject['Key']}\" style=\"height: 200px; width: 200px;\"></a></p>\n";
        } // end foreach
    } // end foreach

} catch (Exception $e) {
    // Only show this for testing purposes...
    echo $e->getMessage();
}
Then, to display the files from the S3 bucket: we do not want the AWS S3 URL in the browser, so we display the images through PHP with the showitem.php file. Here is the code for that file; it’s a very simple image displayer…


<?php

// Require the Composer autoloader.
require '../vendor/autoload.php';

use Aws\S3\S3Client;

$bucket = "my-bucket-name";

try {

    // Instantiate the S3 client with your AWS credentials
    $s3Client = S3Client::factory(array(
        'version' => 'latest',
        'region'  => 'eu-west-2',
        'credentials' => array(
            'key'    => 'unique_string', // From AWS IAM user
            'secret' => 'unique_secret_string' // From AWS IAM user
        )
    ));

    $keyname = filter_var($_GET['item'], FILTER_SANITIZE_STRING);

    // Get the object.
    $result = $s3Client->getObject([
        'Bucket' => $bucket,
        'Key'    => $keyname
    ]);

    // Display the object in the browser.
    $type = $result['ContentType'];
    $size = $result["ContentLength"];
    header('Content-Type: ' . $type);
    header('Content-Length: ' . $size);
    echo $result['Body'];

    // Alternatively, get file contents from S3 Bucket like this...
    // $data = file_get_contents('s3://'.$bucket.'/'.$keyname);
    // echo $data;

} catch (Exception $e) {
    // Only show this for testing purposes...
    echo $e->getMessage();
}

When you add a file to S3, you’re probably either doing it programmatically or dragging and dropping it into the S3 Browser. When you want to use the file, you’ll find that you can get quite a lot of information back from the getObject() method. I wanted the exact response key for the content length, so I just looked up getObject in the docs to see what is returned.

This is a very quick example. There are some pretty major things here that would be better done a different way. For example, when we connect to S3 with the factory() method, we should probably use the .aws/credentials file to make one or more profiles so that our secret info isn’t listed in the PHP of the public part of the website.
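As a sketch of that approach (the file location and profile name are the SDK defaults; the keys are placeholders), the credentials file on the server would look like this…

```ini
; ~/.aws/credentials -- lives outside the web root
[default]
aws_access_key_id = unique_string
aws_secret_access_key = unique_secret_string
```

The factory() call can then drop the inline credentials array and pass 'profile' => 'default' instead, so nothing secret sits in the site’s PHP.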

Also, S3 can be accessed with the AWS CLI.
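For example (assuming the CLI is installed and has been set up with aws configure, and using a hypothetical bucket name), listing and copying objects looks like this…

```shell
# List all buckets, then the objects in one bucket...
aws s3 ls
aws s3 ls s3://my-bucket-name

# Copy a file up to the bucket, then back down again...
aws s3 cp ./photo.jpg s3://my-bucket-name/photo.jpg
aws s3 cp s3://my-bucket-name/photo.jpg ./photo-copy.jpg
```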

I had a PHP website that was the last man standing on some slow shared hosting with no SSH access. I decided that since I would be moving the website anyway, why not try something different with it? The website in question is PHP/PDO/MySQL with no framework.

Having already tried AWS Lightsail App+OS, I wanted to experiment with the Lightsail “OS Only” option. What better thing to do than to install the Nginx web server on it? Starting from scratch with an OS Only box I would be able to take a look at another side of AWS Lightsail (without Bitnami) and also learn about Nginx and using a LEMP stack.

I created a new instance of Lightsail with Ubuntu 18.04 LTS. It was exactly the same as most VPS that come with only Linux installed. After installing Nginx on Lightsail, the version of PHP I got by installing PHP-FPM was PHP 7.2.10.

Link… https://www.digitalocean.com/community/tutorials/how-to-migrate-from-an-apache-web-server-to-nginx-on-an-ubuntu-vps

But, we’ve jumped ahead. Let’s look at Linux…

First Things First: Linux

The first thing to do is to update and upgrade Linux. Sudo was already installed, so it’s straight into…

sudo apt-get update
sudo apt-get upgrade

I believe that while Debian is a pretty bare-bones install, Ubuntu comes with a lot of stuff pre-installed, such as sudo and nano, which is very convenient.


With Linux updated, we can get the next part of the LEMP stack installed: Nginx (Engine-X). This is the only part of the stack that I’ve not had much experience with in the past, so it was the most interesting for me. I was expecting it to be more different from Apache than it turned out to be, apart from the different style of the config file...

sudo apt-get install nginx

Now, magically the unique domain you have in your Lightsail console will work in your browser, giving you a page like…

Welcome to nginx!
If you see this page, the nginx web server is successfully installed and working. Further configuration is required.

For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.

Thank you for using nginx.

Doing some prep for when we change the DNS and point the domain name at this hosting, we should also make a config file…

The Nginx Config File

The config file should be named after the site and should be created in /etc/nginx/sites-available. Copy the info from the default file across to the new file, changing the domain name. You can then set up a symlink to “sites-enabled” and unlink the default site…

sudo ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/


sudo unlink /etc/nginx/sites-enabled/default
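The new sites-available file might start as a minimal sketch like this (example.com and the web root path are placeholders; the PHP location block comes later, once PHP-FPM is installed)…

```nginx
server {
    listen 80;
    listen [::]:80;
    server_name example.com www.example.com;
    root /var/www/example.com;
    index index.php index.html;

    location / {
        try_files $uri $uri/ =404;
    }
}
```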

Test the new Nginx config, then restart to load the new settings with…

sudo nginx -t
sudo systemctl reload nginx

Change the permissions of the /var/www/htdocs/ folder, then upload the files and convert the .htaccess rules…
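For the permissions, something like this (assuming the default ubuntu login user on Lightsail and that Nginx runs as www-data; adjust both to your setup)…

```shell
# Let the login user own the files, with the web server group able to read them...
sudo chown -R ubuntu:www-data /var/www/htdocs
sudo chmod -R 755 /var/www/htdocs
```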


Something like this will eventually need the * removing for nginx…

<Files *.inc>
Deny From All
</Files>


location ~ .inc {
    deny all;
}

# I also had to convert the "break" to "last" on the mod_rewrite...

rewrite ^/(.*)/$ /item.php?item=$1 last;

Then, add the code to the example.com config file and test it again. Any duplicate directives will need commenting out with #. In my config file \.php caused an error, so I removed the slash.


Now, we can install PHP and MySQL to complete our LEMP stack…

sudo apt install php-fpm php-mysql 

Luckily, the site I was moving had pretty modern PHP with nothing that needed fixing at all. I uploaded a file calling the function phpinfo() to test that PHP was working. All good!
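The test file only needs one line; the filename can be anything, say info.php (a made-up name). Delete it when you’re done, as it leaks server details…

```php
<?php
// Dump the PHP version, loaded modules and config to the browser...
phpinfo();
```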

Nginx Default Log File

index.php not working! Look at the log file…

tail /var/log/nginx/error.log

Yes, the PHP was fine; it turns out that PDO was unhappy that I hadn’t added the database yet…


Finish off by getting MariaDB installed, then check it’s working…

sudo apt install mariadb-client-core-10.1
sudo apt install mariadb-server
sudo systemctl status mariadb

I was getting the error, below…

ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2 "No such file or directory")

So, I did a “locate” for the my.cnf file and the “mysqld.sock” file, and added this to the MySQL/MariaDB config file, my.cnf…

socket  = /var/run/mysqld/mysqld.sock


sudo service mysql restart

Login for the first time with sudo…

sudo mariadb -uroot

Now you can create the database and database user for the app.
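Inside the MariaDB prompt, that’s something along these lines (the database name, username and password are all hypothetical placeholders)…

```sql
CREATE DATABASE myapp;
CREATE USER 'myappuser'@'localhost' IDENTIFIED BY 'a-strong-password';
GRANT ALL PRIVILEGES ON myapp.* TO 'myappuser'@'localhost';
FLUSH PRIVILEGES;
```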


SSL Encryption

I pointed the domain name at the public IP address with CloudFlare. Server-side SSL encryption to follow.


Simple Password Authentication

sudo apt install apache2-utils
sudo htpasswd -c /etc/nginx/.htpasswd username

Then put the following in the “location” you want to be protected in the config file…

auth_basic "Administrator Login";
auth_basic_user_file /etc/nginx/.htpasswd;

Then, test and restart Nginx.

Force or Remove WWW

For some sites I prefer to keep the www in, so on this occasion I did the opposite of this…

server {
    server_name www.example.com;
    return 301 $scheme://example.com$request_uri;
}

server {
    server_name example.com;
    # […]
}


index.php Downloads instead of Displaying

Sometimes index.php, or any PHP file, can start downloading instead of displaying normally in the browser. The fix for this is to pass the PHP scripts to the FastCGI server. Make sure you use the correct file path; the socket below is for PHP 7.2 and will be different for other versions of PHP…

server {
    listen 80;
    listen [::]:80;
    root /var/www/myApp;
    index index.php index.html index.htm;

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/run/php/php7.2-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}

Debug Mode

To show extra info in the error.log file, add the word “debug” to the error_log statement…

error_log  /etc/nginx/error.log debug;

Example nginx.conf file

This file is taken from here. It shows SSL encryption…

server {
    listen 80;
    listen [::]:80;
    server_name example.com www.example.com;
    return 301 https://example.com$request_uri;
}

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name www.example.com;
    ssl on;
    ssl_certificate /root/certs/APPNAME/APPNAME_nl.chained.crt;
    ssl_certificate_key /root/certs/APPNAME/ssl.key;
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:20m;
    ssl_session_tickets off;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_stapling on;
    ssl_stapling_verify on;
    ssl_trusted_certificate /root/certs/APPNAME/APPNAME_nl.chained.crt;
    root /srv/users/serverpilot/apps/APPNAME/public;
    access_log /srv/users/serverpilot/log/APPNAME/APPNAME_nginx.access.log main;
    error_log /srv/users/serverpilot/log/APPNAME/APPNAME_nginx.error.log;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-SSL on;
    proxy_set_header X-Forwarded-Proto $scheme;
    include /etc/nginx-sp/vhosts.d/APPNAME.d/.nonssl_conf;
    include /etc/nginx-sp/vhosts.d/APPNAME.d/.conf;
    return 301 https://example.com$request_uri;
}

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name example.com;
    ssl on;
    ssl_certificate /root/certs/APPNAME/APPNAME_nl.chained.crt;
    ssl_certificate_key /root/certs/APPNAME/ssl.key;
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:20m;
    ssl_session_tickets off;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_stapling on;
    ssl_stapling_verify on;
    ssl_trusted_certificate /root/certs/APPNAME/APPNAME_nl.chained.crt;
    root /srv/users/serverpilot/apps/APPNAME/public;
    access_log /srv/users/serverpilot/log/APPNAME/APPNAME_nginx.access.log main;
    error_log /srv/users/serverpilot/log/APPNAME/APPNAME_nginx.error.log;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-SSL on;
    proxy_set_header X-Forwarded-Proto $scheme;
    include /etc/nginx-sp/vhosts.d/APPNAME.d/.nonssl_conf;
    include /etc/nginx-sp/vhosts.d/APPNAME.d/.conf;
}


The website moved and is working as it did before, but possibly slightly faster.

In just the short time I have been using it, Nginx is already growing on me. I like the simplicity. I like that it is quite similar to Apache in some ways. And I like the fact that it should be faster than Apache.

Not being able to use .htaccess files, and the Nginx config being different from Apache virtualhost files, was not bad at all. A combination of an htaccess-to-Nginx converter and Google/Stackoverflow has already taught me a lot about how to replicate what I might do with an .htaccess or virtualhost file in Nginx.

As expected, the “OS Only” version of AWS Lightsail was much more like a standard VPS and there was nothing too hard in setting it up and moving a site across and onto Nginx.