Toggling Transparency in iTerm2

I recently started developing full-time in Vim again because all my code has to run on remote virtual machines. I like using iTerm2 with some transparency enabled so I can see what’s going on in my browser, but it’s started giving me a headache now that I’m spending all day in my terminal. I couldn’t find a built-in hotkey for toggling transparency in iTerm2, so I cooked something up with AppleScript.

  1. Launch Automator, which comes pre-installed with your Mac.
  2. When prompted for a new document type, select “Service”.
  3. In the “Actions” tab on the left, search for “applescript”.
  4. Drag the “Run AppleScript” result into the window on the right.
  5. Copy and paste this script into the text area, then hit the hammer icon to build it.

    tell application "iTerm"
        if the transparency of the current session of the current window > 0 then
            repeat with aWindow in windows
                tell aWindow
                    repeat with aTab in tabs of aWindow
                        repeat with aSession in sessions of aTab
                            tell aSession
                                set transparency to 0
                            end tell
                        end repeat
                    end repeat
                end tell
            end repeat
        else
            repeat with aWindow in windows
                tell aWindow
                    repeat with aTab in tabs of aWindow
                        repeat with aSession in sessions of aTab
                            tell aSession
                                set transparency to 0.3
                            end tell
                        end repeat
                    end repeat
                end tell
            end repeat
        end if
    end tell
    


  6. Change the dropdown boxes on top so that they read “Service receives no input in iTerm”. You may have to click “Other…” in order to select iTerm.
  7. Save this as “Toggle Transparency”.
  8. Open System Preferences, and go to the Keyboard section, then the Shortcuts tab, then the Services category.
  9. Click on “Add Shortcut” next to “Toggle Transparency”, and record a key combination. I use Cmd + Shift + U.

Try it out! Open iTerm2, and hit your key combination a few times. You can also run the script by clicking on iTerm2 in your menu bar and going to Services. If your default profile already has transparency (mine does not), you may want to tweak the AppleScript I provided: just swap the transparency values of 0.3 and 0.

Algebraic Data Types in Swift

An algebraic data type is a type composed of other types. The kind we’ll build here is a sum type: a type whose value may be one of several cases. Here’s how we would implement a linked list as an algebraic data type in Swift:

enum LinkedList<Element> {  
    case empty
    indirect case node(data: Element, next: LinkedList)
}

This defines an enum called LinkedList that might either be .empty or a .node that points to another LinkedList. There are three interesting things to note. The first is that we’ve created a generic data type, so the type of Element is declared by the consumer of LinkedList. The second is that the .node case uses LinkedList recursively, and must therefore be marked with indirect. The third is that since the .empty case has no parameters, the parentheses may be omitted.

Here’s how we define instances of LinkedList:

let a: LinkedList<Int> = .empty  
let b: LinkedList<Int> = .node(data: 1, next: .node(data: 2, next: .empty))  

To work with an algebraic data type, we deconstruct it using pattern matching. Here’s how we would print a LinkedList:

enum LinkedList<Element>: CustomStringConvertible {  
    // cases omitted for brevity
    var description: String {
        switch self {
        case .empty:
            return "(end)"
        case let .node(data, next):
            return "\(data), \(next.description)"
        }
    }
}

let b: LinkedList<Int> = .node(data: 1, next: .node(data: 2, next: .empty))

print("b: \(b)") // => b: 1, 2, (end)  

We’ve implemented the CustomStringConvertible protocol so that we can use string interpolation to print LinkedList instances. While it is possible in Swift to pattern match using an if case statement, the switch is preferable because the compiler will warn us if we’ve forgotten to handle a case. This safety is one big advantage that algebraic data types have over their classical counterparts. In a traditionally implemented linked list, you would have to remember to check if the next pointer was null to know if you were at the end. This problem gets worse as the number of cases increases in more complex data structures, such as full binary range trees with sentinels.

Note that since the description instance variable only has a getter, we do not need to use the more verbose syntax:

var description: String {  
    get {
        // etc
    }
    set {
        // etc
    }
}

Our print function was useful, but in order to do interesting things we need to be able to modify algebraic data types. Rather than mutate the existing data structure, we’ll return a new data structure that represents the result after the requested operation. Since we’re not going to mutate the original data structure, we’ll follow the Swift 3 naming convention of using a gerund for our methods. Here’s how we would add an .inserting method to LinkedList:

enum LinkedList<Element>: CustomStringConvertible {  
    // cases and "description" omitted for brevity
    func inserting(e: Element) -> LinkedList<Element> {
        switch self {
        case .empty:
            return .node(data: e, next: .empty)
        case .node:
            return .node(data: e, next: self)
        }
    }
}

let c = b.inserting(e: 0)  
print("c: \(c)") // => c: 0, 1, 2, (end)  

The key is that we’re returning a new LinkedList that represents the result after the insertion. Notice how in the .node case, we do not need to pattern match on .node(data, next) because data and next are not needed in order to construct the new .node; we can simply use self as the next: node.

Finally, let’s implement the classic “reverse a linked list” interview question using our algebraic data type:

enum LinkedList<Element>: CustomStringConvertible {  
    // cases, "description", and "inserting" omitted for brevity
    func appending(_ e: Element) -> LinkedList<Element> {
        switch self {
        case .empty:
            return .node(data: e, next: .empty)
        case let .node(oldData, next):
            return .node(data: oldData, next: next.appending(e))
        }
    }

    func reversed() -> LinkedList<Element> {
        switch self {
        case .empty:
            return self
        case let .node(data, next):
            return next.reversed().appending(data)
        }
    }
}

print("reversed c: \(c.reversed())") // => reversed c: 2, 1, 0, (end)  

I’ll leave it as an exercise to the reader to figure out what the running time for this algorithm is.

Here’s the Playground on GitHub

A Faster Horse: The Future Of Web Development

Let’s talk about the rise and fall of technologies. There’s a neat graphical representation of the life-cycle of a technology called an S-Curve:

[Image: an S-Curve]

JavaScript is enjoying tremendous success right now, so I’m not going to bore you with the metrics. Exponential growth is fantastic, and as someone who uses JavaScript at almost every layer of the tech stack, I couldn’t be happier for my community. It’s certainly made my job as a web developer much easier.

Now, what about the future of JavaScript are you most excited about? I can think of a few off the top of my head:

  • New language features: ES6 generators, template strings, SIMD, …
  • Package manager/module ecosystem upgrades: parameterized scripts, private repositories, …
  • Framework updates: Angular 2, koa, …

Now, these are all quite exciting. New language features let us write more expressive code, and do faster computations. New frameworks help us write more robust applications. npm is amazing, and it is certainly the innovation that made node so successful, so any improvement to it is just icing on the cake.

However, these are all incremental improvements. We’re still sliding up the same S-Curve, and we’re going to reach maturity eventually because all technologies experience diminishing returns. You’re optimistic to a fault if you think that HTML+CSS+JavaScript is the holy grail of web development, and that we can’t do better. As much as we love our tools, we have to accept that they are far from perfect.

[Image: multiple overlapping S-Curves]

S-Curves don’t exist in isolation. Something else is on the horizon, and it’s not going to be an incremental improvement. This is why I think it was fantastic that TJ made a high-profile jump to a different S-Curve. Go has its own set of problems, but that’s not the point; he recognized the limits of the tools at hand, and tried something different. It’s far easier to pour more effort into the tools you are already familiar with than it is to try something completely different.

Pick any technology out there and you’ll find someone who can wax poetic about how it’s better than what you’re using right now. It doesn’t matter if you think they’re right or wrong, listen like you’re wrong, because eventually, you will be wrong. Eventually, you will be the cranky administrator who still believes that JSP is the holy grail of web development. Eventually, you will have to do something insane like write a new VM for the aging technology you’re locked into. Eventually, you will still be concatenating strings like this when everyone else is using the + operator.

What is next-generation web development going to look like? I don’t know, but I do have a small wish list:

  • Lower-level access to rendering: Lose the HTML/CSS, and go straight to the canvas
  • Multithreading support: We’re close to the limit of single-core performance, and even cell phones have multiple cores now
  • Lower-level access to memory: This is a complement to multithreading, and it’s nice to not have to rely on garbage collection if you know what you’re doing
  • Static verification: This should be an option for applications where correctness is important
  • Better error handling: This is a real pain in node right now

What do you want to see next?

[Image: web development S-Curves]

Requiring Modules in CouchDB

Thanks to node.js, using the same code on the server and client is easier than ever. What about running application code in the database? If you have a function that is expensive to compute, but always returns the same result for the same input, then you could get a significant speedup by having your database cache the results of the computation ahead of time. This is especially true if you’re using CouchDB, which uses incremental MapReduce to make view computation very efficient.

Our realtime app at Getable is backed by CouchBase, a derivative of CouchDB. CouchBase and CouchDB share the same replication protocol, but differ in several important ways. One such difference is that CouchBase does not support require, while CouchDB does. One major caveat with CouchDB’s require is that you have to push the modules you want up as design documents. Since forcing humans to resolve dependency trees is outlawed in the Geneva Conventions, we need a better way of doing this.

Enter Browserify’s standalone option. We can use it to create a string of code that is easily prepended to the map function in our views. Browserify’s standalone option takes a string, which is the name of the exported module. Critically, the export will be added to the global object if require is not available, which is the case in CouchBase views. Therefore, if you create a browserify bundle with the {standalone: 'mymodule'} option and prepend that string to your map function, you will now have global.mymodule available for use. The one gotcha is that the global object does not exist in CouchBase views either, so it must be initialized ahead of the bundle.

Create an entry script that just exports the module you want:

// entry.js
module.exports = require('mymodule')  

Then bundle it with the standalone option and initialize global ahead of time:

// bundler.js
var browserify = require('browserify')
var bundle = browserify({standalone: 'mymodule'})  
bundle.add('entry.js')  
bundle.bundle(function (err, src) {  
  var prelude = 'var global={};' + src.toString()
})

Now, if you have a map function, you can just insert this prelude right after the function header:

function mapFunction (doc) {  
  // prelude goes here!
  emit(doc._id, global.mymodule(doc))
}
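The splice itself only takes a few lines. Here’s a minimal sketch (the helper name is illustrative, not from any library):

```javascript
// Hypothetical helper (not part of any library): splice a prelude string
// into a map function's source, right after the function header.
function inlinePrelude (mapFn, prelude) {
  var src = mapFn.toString()
  var headerEnd = src.indexOf('{') + 1 // end of "function (doc) {"
  return src.slice(0, headerEnd) + '\n' + prelude + '\n' + src.slice(headerEnd)
}

function mapFunction (doc) {
  emit(doc._id, doc)
}

// The string we'd push up as the view's map function:
var designDocMap = inlinePrelude(mapFunction, 'var global={};')
```

Since the result is just a string, it can be dropped straight into a design document.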

As an implementation note, we have a small script that uses Function.toString() to manage our design documents. It turns our map functions into strings, searches for the use of application logic, and browserifies the appropriate standalone bundle for each function. It’s less prone to failure than manual updates, and makes the experience just a bit more magical.


The “One Year Later” update: we’ve seen vastly improved performance by pushing these standalone bundles up under the lib key.
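For reference, a design document using the lib key might be shaped roughly like this (the module and view names are placeholders; check your database’s documentation for the exact structure):

```javascript
// Illustrative shape of a design document that ships a bundle under the
// "lib" key instead of inlining it into every map function.
// "mymodule" and "byId" are hypothetical names.
var designDoc = {
  _id: '_design/app',
  views: {
    lib: {
      // the browserify standalone bundle string would go here
      mymodule: "module.exports = function (doc) { return doc._id }"
    },
    byId: {
      map: "function (doc) { emit(doc._id, doc) }"
    }
  }
}
```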

Jankproof JavaScript

JavaScript is getting faster all the time, but things like long lists of complex cells are always going to be expensive to compute. To solve this, I wrote the unjank module last week. It helps you do expensive things in JavaScript without causing the user experience to suffer.

I’m happy to report that after a week of production use, it’s clear that this technique is a significant improvement over what we were doing before for several reasons.

Device Agnostic

It doesn’t matter how fast or slow the task is; unjank benchmarks it on-the-fly and runs it as quickly as the device will allow. This means that your application is jank-free on all devices without you having to come up with magic numbers that determine how quickly a task should run.

Smooth Scrolling

An unexpected discovery was that kinetic scrolling in WebKit works very well even if the page is getting longer during the scroll. This means that if your user is scrolling down a long list as it is being rendered with unjank, they will not perceive it as slow at all. WebKit preserves the momentum of the scroll and keeps going as the page gets longer.

Aborting Tasks

The ability to abort an ongoing task is critical because most tasks are initiated by a user action. For example, if you have two tabs that have a long list each, quickly switching between the tabs will eventually crash the application unless the rendering of the lists is aborted when the tab becomes inactive.

Conclusion

I’m going to be using unjank a lot more going forward, especially where lists are involved. I pulled up the Getable app to experience it pre-unjank, and it has that signature lagginess associated with web apps, despite our use of requestAnimationFrame. With unjank, our longest lists no longer cause the browser to stutter — a small step out of the uncanny valley of hybrid apps.