Fix Ubuntu Grub Boot Error

This error usually happens when /boot is full and grub doesn’t get updated/installed properly during a system update.

error: symbol 'grub_file_filters' not found.
Entering rescue mode...
grub rescue>

One way to fix this is to boot with a Live USB image, mount the root filesystem, chroot into it, and update/install grub again.

mount /dev/<vg/root> /mnt        # the root filesystem (an LVM volume here)
mount -t proc none /mnt/proc
mount -o bind /dev /mnt/dev
mount -t sysfs /sys /mnt/sys
chroot /mnt
grub-install /dev/sda            # inside the chroot: reinstall grub to the MBR
update-grub
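
If /boot lives on its own partition (common with LVM roots like the one above), it must also be mounted inside the chroot before grub-install will work; and since a full /boot caused the problem in the first place, it’s worth purging old kernels once recovered. A minimal sketch, assuming a hypothetical /dev/sda1 boot partition:

mount /dev/sda1 /boot        # inside the chroot; hypothetical boot partition
df -h /boot                  # confirm space was the culprit
apt autoremove --purge       # remove old kernels to free /boot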

Install Windows 10 via Linux PXE

Windows 10 setup does *not* work via a PXE RAM disk when the ISO is loaded directly. A Windows Preinstallation Environment (WinPE) must be loaded first, which then runs setup from a mapped network share where the ISO contents have been dumped.

This guide assumes some prerequisites:
1. a working PXE environment
2. a pre-built Windows PE image
3. a Samba/Windows share with the dumped Windows 10 image (one way to dump it is shown below)
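
To dump the ISO contents onto the share from a Linux host, one approach is to loop-mount the image and copy everything out. The paths here are illustrative, not the ones from my setup:

mount -o loop Win10_x64.iso /mnt/iso
cp -r /mnt/iso/* /srv/tftpboot/images/win10/
umount /mnt/iso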

The pxelinux config file should contain the following entry to load Windows PE.

label Windows PE x64
  kernel memdisk
  append iso initrd=images/WinPE_x64.iso

PE will boot into a command prompt. Initialize it, map the share, and run setup:

wpeinit
net use * \\192.168.0.220\tftpboot\images\win10
z:\setup

Analysis of the Intel AXXRMM4LITE iKVM Module

Intel’s line of server motherboards has iKVM built in, but it must be enabled with a separate licensing module that plugs into the motherboard. I got one recently and was curious how this “license” actually works.

For the AXXRMM4LITE module, the whole thing is a single SPI flash chip (Winbond 25X10CLNIG) behind an 8-pin connector. To my surprise, the flash is completely empty: the BMC/BIOS only checks for the presence of an SPI flash.

[Photos: the SPI flash chip; the 8-pin connector pin-out]

JIRA Bulk Link via API

JIRA can’t natively perform bulk actions on issue links. I needed to move some links to a different link type, so here’s a quick JavaScript script that runs under Node.js.

It queries for all the relevant issues using JQL and then iterates over each issue and every link that matches the link type.

'use strict';

const request = require('request');
const async = require('async');

const user = 'youruser';
const pass = 'yourpass';
const url = 'http://yourjira';
const api = url + '/rest/api/2/';

// find all issues of type 'work package'
request({
  auth: {
    user: user,
    pass: pass
  },
  url: api + 'search',
  method: 'post',
  json: true,
  body: {
    jql: 'project = TEST AND issuetype = "Work Package"',
    maxResults: 100,
    fields: ['issuelinks']
  }
}, (err, res, body) => {
  // console.log(JSON.stringify(body, null, 4));

  // for each 'Contains' link to an 'Acceptance Criteria' issue, delete it and relink it as 'Requirements'
  async.eachSeries(body.issues, (issue, doneIssue) => {
    console.log('processing ' + issue.key);
    async.eachSeries(issue.fields.issuelinks, (link, doneLink) => {
      if (link.outwardIssue && link.outwardIssue.fields.issuetype.name === 'Acceptance Criteria' && link.type.name === 'Contains') {
        console.log('processing link '+ link.id);
        async.series([
          function (done) {
            request({
              auth: {
                user: user,
                pass: pass
              },
              url: api + 'issueLink' + '/' + link.id,
              method: 'delete',
              json: true
            }, done);
          },
          function (done) {
            request({
              auth: {
                user: user,
                pass: pass
              },
              url: api + 'issueLink',
              method: 'post',
              json: true,
              body: {
                type: {
                  name: 'Requirements'
                },
                inwardIssue: {
                  'key': issue.key
                },
                outwardIssue: {
                  'key': link.outwardIssue.key
                }
              }
            }, done);
          }
        ], doneLink);
      } else {
        doneLink();
      }
    }, doneIssue);
  }, (err) => {
    if (err) console.error(err);
    console.log('complete');
  });
});
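
Save the script (relink.js is a placeholder name) and run it with Node.js after installing its two dependencies:

npm install request async
node relink.js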


Zabbix with Sendgrid SMTP Notification on Ubuntu

The built-in mail option seems to work out of the box, but every post on the subject suggests using a script that calls local mail binaries for notifications; those require extra dependencies and configuration. With SendGrid and its REST API, the same thing can be done with a simple script around curl. I assume a SendGrid account and API key have already been set up.

Creating the script

sudo vim /usr/lib/zabbix/alertscripts/sendgrid.sh

sendgrid.sh

#!/bin/bash
SENDGRID_API_KEY="YOURKEYHERE"

# $1 = recipient, $2 = subject, $3 = message body
curl --request POST \
 --url "https://api.sendgrid.com/v3/mail/send" \
 --header "Authorization: Bearer $SENDGRID_API_KEY" \
 --header 'Content-Type: application/json' \
 --data "{\"personalizations\": [{\"to\": [{\"email\": \"$1\"}]}], \"from\": {\"email\": \"[email protected]\"}, \"subject\": \"$2\", \"content\": [{\"type\": \"text/plain\", \"value\": \"$3\"}]}"

Notification Testing

There is no way to test aside from triggering an actual fault, so it’s necessary to create a dummy condition and then trigger it with the zabbix_sender utility. I had to explicitly install it:

Install zabbix_sender

sudo apt install zabbix-sender   # the package name uses a hyphen; the binary is zabbix_sender

Create Action and Condition

  1. Configuration -> Actions -> Create action
  2. Select condition
  3. Add new condition (= Dummy trigger)
  4. Select the Operations tab
  5. Add new operations (user with custom media type)
  6. Save by clicking Add
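
The dummy condition itself is a Zabbix trapper item fed by zabbix_sender, plus a trigger that fires when the received value goes stale. A sketch of one such setup, assumed rather than copied from the referenced post; with a numeric unixtime item like this, the --value arguments below would be date +%s output instead of RFC 3339 strings:

Item: type "Zabbix trapper", key test.timestamp, numeric (unsigned), units unixtime
Trigger expression: {192.168.0.240:test.timestamp.fuzzytime(60)}=0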

Trigger

VALUE="$(date --rfc-3339=ns --date='-1 hour')"; zabbix_sender --zabbix-server=127.0.0.1 --host="192.168.0.240" --key="test.timestamp" --value="${VALUE}"

De-trigger

VALUE="$(date --rfc-3339=ns)"; zabbix_sender --zabbix-server=127.0.0.1 --host="192.168.0.240" --key="test.timestamp" --value="${VALUE}"

reference: http://cavaliercoder.com/blog/testing-zabbix-actions.html



Installing oracle-java9-installer on Ubuntu Error Fix

The oracle-java9-installer package, as of writing, uses an old URL that doesn’t redirect properly to the right location, which causes the installer to fail when it tries to download the binaries.

Setting up oracle-java9-installer (9b162-1~webupd8~0) ...
Using wget settings from /var/cache/oracle-jdk9-installer/wgetrc
Downloading Oracle Java 9...
--2017-05-19 04:10:54-- http://www.java.net/download/java/jdk9/archive/162/binaries/jdk-9-ea+162_linux-x64_bin.tar.gz
Resolving www.java.net (www.java.net)... 137.254.56.25
Connecting to www.java.net (www.java.net)|137.254.56.25|:80... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://home.java.net/download/java/jdk9/archive/162/binaries/jdk-9-ea+162_linux-x64_bin.tar.gz [following]
--2017-05-19 04:10:54-- https://home.java.net/download/java/jdk9/archive/162/binaries/jdk-9-ea+162_linux-x64_bin.tar.gz
Resolving home.java.net (home.java.net)... 156.151.59.19
Connecting to home.java.net (home.java.net)|156.151.59.19|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: http://www.oracle.com/splash/java.net/maintenance/index.html [following]
--2017-05-19 04:10:54-- http://www.oracle.com/splash/java.net/maintenance/index.html
Resolving www.oracle.com (www.oracle.com)... 184.30.70.138, 2600:1408:10:184::2d3e, 2600:1408:10:185::2d3e
Connecting to www.oracle.com (www.oracle.com)|184.30.70.138|:80... connected.
HTTP request sent, awaiting response... 503 Service Unavailable
2017-05-19 04:10:54 ERROR 503: Service Unavailable.

download failed
Oracle JDK 9 is NOT installed.

You will need to manually download the binary and run dpkg to finish configuring it. Change the URL prefix from http://www.java.net/download/ to http://download.java.net/:

cd /var/cache/oracle-jdk9-installer
sudo wget http://download.java.net/java/jdk9/archive/162/binaries/jdk-9-ea+162_linux-x64_bin.tar.gz
sudo dpkg --configure -a
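
Once dpkg finishes configuring the package, the install can be sanity-checked:

java -version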

Resizing Virtualbox Fixed-Size VDI Disks

As of writing (2017-04-06), there is no native tooling to resize a fixed-size VDI; those on the internet saying they can resize one are misinformed. You will get this error message:

Progress state: VBOX_E_NOT_SUPPORTED
VBoxManage.exe: error: Resize medium operation for this format is not implemented yet!

To “resize” a fixed-size VDI, it must be cloned to a larger sized VDI.

Step 1 – Create the larger VDI and move data:

Method 1 – Using VBoxManage:

VBoxManage clonehd [old-VDI] [new-VDI] --variant Standard    # fixed -> dynamic copy
VBoxManage modifyhd [new-VDI] --resize [megabytes]           # only dynamic VDIs can be resized
VBoxManage clonehd [new-VDI] [newnew-VDI] --variant Fixed    # dynamic -> larger fixed copy

The disadvantage of this method is that it requires two additional full copies of the disk.
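
A concrete run with hypothetical file names, growing the disk to 100 GB (102400 MB):

VBoxManage clonehd disk.vdi disk-dynamic.vdi --variant Standard
VBoxManage modifyhd disk-dynamic.vdi --resize 102400
VBoxManage clonehd disk-dynamic.vdi disk-100g.vdi --variant Fixed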

Method 2 – Using Clonezilla:

  1. Create and attach a new, larger fixed-size VDI using the VirtualBox interface.
  2. Attach a Clonezilla ISO and boot from it (press F12 on boot to select the CD-ROM).
  3. Use the device-to-device mode to clone the old drive onto the new one.

Step 2 – Expand the underlying partitions:

  1. Detach the Clonezilla ISO and attach a GParted ISO.
  2. Boot it the same way and resize/move partitions as needed.

Thoughts on Building Serverless Web Applications with Amazon Lambda

I spent several weekends working on a project using the Amazon Lambda serverless micro-architecture to see whether it was worth using for larger projects. I created a micro SaaS – https://pdfbatchfill.com – that essentially takes a bunch of rows and spits them out as filled-in fields within PDF forms. Here are my brief thoughts on Lambda:

Complex setup

My overall experience with Lambda was positive; however, I used ClaudiaJS, which abstracts away nearly all the underlying plumbing. When I first started, I tried to set everything up myself through the web interface and found the number of options overwhelming, since Lambda by itself is a generic application “container”. A lot of glue is required between API Gateway and Lambda just to expose the endpoints, and there are a lot of little things to do to get a single working route, so I settled on the ClaudiaJS framework to deal with them. There are others, like Serverless.

Comes fully-loaded

Everything just works once the application is deployed. The endpoints are automatically connected to logging within CloudWatch, line by line, separated by instance. Having zero infrastructure to maintain is surprisingly liberating; I am able to focus mostly on the application itself.

Using S3 to host the static page and Cloudflare to handle the DNS, I got an SSL-enabled site for free, assuming a low-traffic site of course.

Limitations due to maturity

AWS services tend to start out very stripped down, and Lambda is no exception. One issue I encountered was Lambda’s inability to accept binary form data. Searching the Lambda forum showed it wasn’t quite ready for general adoption: your project could be stuck if support for a particular function you need is missing, something you might only discover midway through. Luckily, I was able to work around the issue by sending the binary data directly through S3.


Ubuntu Linux Email Notification on Hard Disk S.M.A.R.T Errors

This is a short guide on setting up the sendmail command on Ubuntu to work with smartmontools, which monitors the SMART status of drives and sends email notifications on any failure.

1. Set up sendmail to relay to an external SMTP server (Gmail, Hotmail, or your own host).

sudo apt-get install postfix

/etc/postfix/main.cf

...
myhostname = yourhostname
relayhost = [yourhosturl]:587
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
smtp_use_tls = yes
...

/etc/postfix/sasl_passwd

[yourhosturl]:587 username:password

sudo chmod 400 /etc/postfix/sasl_passwd
sudo postmap /etc/postfix/sasl_passwd
sudo service postfix restart

2. Test sendmail.

echo -e "Subject: it works\nYAY!" | sendmail [email protected] \
-F yourhostname

3. Set up smartmontools to monitor drives and send notifications on failure.

sudo apt-get install smartmontools

/etc/smartd.conf

/dev/sda -H -l error -l selftest -f -s (S/../../1/01) -m \
[email protected] -M exec /usr/share/smartmontools/smartd-runner

/etc/default/smartmontools

start_smartd=yes

sudo service smartmontools restart
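
To verify the whole mail path without waiting for a real failure, smartd has a test directive that sends a test email for the device every time the daemon starts. Temporarily use a line like this in /etc/smartd.conf and restart the service:

/dev/sda -H -m [email protected] -M test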

ref:
https://linux.die.net/man/5/smartd.conf
https://easyengine.io/tutorials/linux/ubuntu-postfix-gmail-smtp/


Determining SSD Approximate Remaining Lifespan

I couldn’t find anything readily available that was trustworthy or free for determining the remaining lifespan of an SSD. MTBF isn’t very useful, since power-on time doesn’t wear an SSD the way it does a hard disk. The next best thing is to take the total amount of data written to the drive and compare it against benchmark values, in particular from torture tests (write until dead) that others have run and published.

The total bytes written is recorded in the drive’s SMART data, and smartmontools is needed to read it. It is cross-platform and free.

https://www.smartmontools.org/wiki/Download

Once installed, start CMD as an administrator (assuming Windows).

C:\WINDOWS\system32>smartctl -a /dev/sda
smartctl 6.5 2016-05-07 r4318 [x86_64-w64-mingw32-win10] (sf-6.5-1)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Device Model: Crucial_CT525MX300SSD4
Serial Number: 16441483025B
LU WWN Device Id: 5 00a075 11483025b
Firmware Version: M0CR031
User Capacity: 525,112,713,216 bytes [525 GB]
Sector Size: 512 bytes logical/physical
Rotation Rate: Solid State Device
Form Factor: < 1.8 inches
Device is: Not in smartctl database [for details use: -P showall]
ATA Version is: ACS-3 T13/2161-D revision 5
SATA Version is: SATA 3.2, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is: Tue Nov 29 23:36:05 2016 PST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

...
...

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
 1 Raw_Read_Error_Rate 0x002f 100 100 000 Pre-fail Always - 0
 5 Reallocated_Sector_Ct 0x0032 100 100 010 Old_age Always - 0
 9 Power_On_Hours 0x0032 100 100 000 Old_age Always - 3
 12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 8
171 Unknown_Attribute 0x0032 100 100 000 Old_age Always - 0
172 Unknown_Attribute 0x0032 100 100 000 Old_age Always - 0
173 Unknown_Attribute 0x0032 100 100 000 Old_age Always - 1
174 Unknown_Attribute 0x0032 100 100 000 Old_age Always - 4
183 Runtime_Bad_Block 0x0032 100 100 000 Old_age Always - 0
184 End-to-End_Error 0x0032 100 100 000 Old_age Always - 0
187 Reported_Uncorrect 0x0032 100 100 000 Old_age Always - 0
194 Temperature_Celsius 0x0022 066 042 000 Old_age Always - 34 (Min/Max 28/58)
196 Reallocated_Event_Count 0x0032 100 100 000 Old_age Always - 0
197 Current_Pending_Sector 0x0032 100 100 000 Old_age Always - 0
198 Offline_Uncorrectable 0x0030 100 100 000 Old_age Offline - 0
199 UDMA_CRC_Error_Count 0x0032 100 100 000 Old_age Always - 0
202 Unknown_SSD_Attribute 0x0030 100 100 001 Old_age Offline - 0
206 Unknown_SSD_Attribute 0x000e 100 100 000 Old_age Always - 0
246 Unknown_Attribute 0x0032 100 100 000 Old_age Always - 266902214
247 Unknown_Attribute 0x0032 100 100 000 Old_age Always - 8350815
248 Unknown_Attribute 0x0032 100 100 000 Old_age Always - 352002
180 Unused_Rsvd_Blk_Cnt_Tot 0x0033 000 000 000 Pre-fail Always - 1932
210 Unknown_Attribute 0x0032 100 100 000 Old_age Always - 0

We are concerned with three numbers:

  • reallocated sector count
  • sector size – 512
  • total LBA blocks written – 266902214 (in this case, it wasn’t labelled as such; in general, it’s the largest number)

If there are any reallocated sectors, that is bad news, as it generally means the drive is on its last legs; otherwise it can be considered a healthy drive. Multiplying the sector size by the total LBAs written yields the total data written in bytes: 512 x 266902214 = 136653933568 bytes, or about 127 GB.
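
The same arithmetic as a quick shell check, using the numbers from the output above:

echo $((512 * 266902214))                        # 136653933568 bytes
echo $((512 * 266902214 / 1024 / 1024 / 1024))   # ~127 GiB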

Referencing various sources on endurance testing, a 240GB TLC drive starts to degrade after 100-1000TB of writes, depending on the make. There’s a wide range depending on the generation of the technology (SLC, MLC, TLC) and the controller used.

With the numbers from my SSD, pessimistically, its consumed life is 127GB / 100TB, or about 0.1%.
