This is a very quick one, but I’ve added PHP 7.2 support to the Known Vagrant build, which lives on a separate branch.
While we wait for PHP 7.2 to enter the official Ubuntu repos, this build uses an unofficial apt repo to obtain PHP 7.2.
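If you’re curious what that looks like, the usual pattern on Ubuntu is to add a third-party PPA and install PHP from there. The snippet below is only an illustration using the widely used ondrej/php PPA; it may not be the exact repo the branch uses.
sudo apt-get install -y software-properties-common   # provides add-apt-repository
sudo add-apt-repository -y ppa:ondrej/php             # a widely used unofficial PHP repo
sudo apt-get update
sudo apt-get install -y php7.2 php7.2-cli php7.2-fpm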
I have previously talked about speeding up your site by using Squid as a reverse proxy to cache served pages. This is a great thing to do, but it presents a problem now that all sites are moving over to HTTPS, since a caching proxy can’t see inside an encrypted connection and so can’t cache HTTPS traffic directly.
These days the standard way of doing this seems to be using Varnish as a cache, and Squid seems to be a little “old hat”. However, I have several client estates which were set up before Varnish came on the scene, so I needed a solution I could get up and running very quickly.
Thankfully, the fix is much the same whichever reverse proxy you’re using: put something in front of it that terminates the HTTPS session and passes plain HTTP on to the proxy. The simplest way to do this is to install NGINX and configure it to handle the HTTPS side.
1) Disable Apache’s handling of HTTPS, if you’ve got an existing, un-cached HTTPS server (a sketch of this is given after the nginx configuration below).
2) Install a basic nginx: apt-get install nginx-light
3) Configure nginx to forward to your proxy (which you have previously configured to listen on port 80):
server {
    listen 443 ssl;
    server_name www.example.com;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:80;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-Port 443;
        proxy_set_header Host $host;
    }
}
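For step 1, on a stock Debian/Ubuntu Apache install, switching the HTTPS side off usually looks something like the lines below; the site name and paths are assumptions about a default setup, so yours may differ.
sudo a2dissite default-ssl    # disable the stock SSL virtual host (the name may differ)
sudo a2dismod ssl             # stop Apache loading mod_ssl
# also remove or comment out any "Listen 443" line in /etc/apache2/ports.conf
sudo service apache2 restart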
After restarting nginx, you should be able to see HTTPS requests coming in on your Squid proxy logs.
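The restart-and-check step might look something like this on Ubuntu; the Squid log path is an assumption, and depending on the package it may be /var/log/squid3/access.log instead.
sudo nginx -t                            # sanity-check the configuration first
sudo service nginx restart
sudo tail -f /var/log/squid/access.log   # watch the proxied requests arrive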
The biggest gotcha that you’re going to hit is that if you’re checking whether a request is HTTPS in your app (e.g. for automatically forwarding from insecure to secure), you’re not going to be able to use the standard protocol checks. The reason is that HTTPS is terminated by nginx, so by the time the session hits your app, it will not be seen as secure!
To perform such a test, you’re instead going to have to check for the X-Forwarded-Proto header ($_SERVER['HTTP_X_FORWARDED_PROTO'] in PHP).
So, I’ve been doing a lot of work with Vagrant recently. Vagrant has been very handy for giving teams a common point of reference for development, especially when the application we were collaborating on requires helper services (such as Node servers and ElasticSearch) to be started.
Here are a couple of gotchas that caused me a whole bunch of headaches; hopefully this’ll make things easier for you.
In hindsight, this one is stupidly obvious, but at the time I had tunnel vision on a couple of other things and didn’t get it until I had an “oh” moment.
The problem was that after provisioning, vagrant ssh would not connect via public key; however, when booting the VM by hand using VirtualBox, it would.
When I finally picked through my Vagrantfile, commenting things out until things worked again, I realised my mistake in a facepalming moment. It’s obvious when you think about it: when you mount your working directory at /home/vagrant, you clobber the ~/.ssh/authorized_keys file that had been inserted by vagrant up.
Man, I felt so dumb.
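If you hit the same thing, a quick way to confirm it from the VM’s console (assuming a VirtualBox synced folder) is to check what is actually sitting in the home directory:
ls -a /home/vagrant           # your project files are there, but no .ssh/ directory
mount | grep /home/vagrant    # a vboxsf entry confirms the synced folder has mounted over it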
If you start getting some weird problems booting your box, you might want to try switching the base box.
I was using the official Ubuntu 16.04 build as my base, but I was having no end of problems provisioning. More often than not, at some random point during the provisioning process, the file system would become read-only. I’d have to reboot and restart with the --provision flag, often several times, before I was able to get a box built.
I switched to an unofficial 16.04 box, and these problems went away.
I’m sure there’s a root cause for this issue, which I’ll investigate later, but for now, switching boxes resolved it.
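If you do swap boxes, remember that the existing machine was built from the old one; after pointing config.vm.box at the new box in your Vagrantfile, you’ll need a full rebuild (this destroys the VM, so make sure nothing important lives only inside it):
vagrant destroy -f   # throw away the VM built from the old base box
vagrant up           # rebuild and provision from the new one
# (vagrant reload --provision is the reboot-and-reprovision step mentioned above)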
The plugin vagrant-vbguest will update the VirtualBox guest additions on the guest machine during provisioning if they are missing or out of date.
This sounds like a great idea, but in my experience it seems to cause more harm than good. The older guest additions will usually work just fine, and installing the new additions over them often breaks something.
My view now is very much “if it ain’t broke, don’t fix it!”.
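If you’ve already got the plugin installed and suspect it’s behind a broken box, pulling it back out is simple enough:
vagrant plugin list                        # check whether vagrant-vbguest is installed
vagrant plugin uninstall vagrant-vbguest   # remove it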
If your provisioning or startup script itself calls another shell script, for example to start up custom services, you’re likely going to run into problems on Windows host machines.
Problems I’ve seen:
default: nohup:
default: failed to run command '/path/to/script.sh'
or:
default: -bash: /path/to/script.sh: /bin/bash^M: bad interpreter: No such file or directory
Both of these stem from the same cause, namely that your script has been imported from the Windows host and still has Windows line endings.
There are a number of solutions you could adopt, and the preference would be to avoid calling shell scripts from within scripts: Vagrant should automatically convert line endings in the scripts referenced in your Vagrantfile, so try to only call scripts from there. If you must call scripts within scripts, as I had to do on a couple of occasions, you’ll need to convert the line endings yourself.
If this is a new project, you could configure git to make sure shell scripts keep Unix line endings or are treated as binary (see the sketch after the sed example below). Or, convert the line endings of your script before execution, e.g.:
sed -i -e 's/\r$//' /path/to/script.sh
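A minimal sketch of the git-side option, assuming you’re happy to add a .gitattributes file to the project (binary stops git converting the files at all, while text eol=lf forces Unix endings on checkout):
echo '*.sh text eol=lf' >> .gitattributes   # always check shell scripts out with LF endings
# or mark them binary so git never touches them:
# echo '*.sh binary' >> .gitattributes
git add .gitattributes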
Hope all this helps!