Writing your own programming language is incredibly daunting. But if you start by writing a language that evaluates very simple expressions, the task becomes approachable and gives you a better understanding of what a full language would entail. So let’s start there. Let’s write a programming language that adds and multiplies numbers.
What’s in a Programming Language?
1. Parsers: Functions that read input and then tell the program what to do with it
2. Data Structure: The parser puts our input into an abstract syntax tree, which our program can then evaluate
3. Nodes: Each node on the tree represents a value or an expression that can be evaluated to give us a value
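The three pieces above can be sketched in a few dozen lines of Ruby. This is a minimal, illustrative version, not production parser design; all of the names here (`NumberNode`, `BinaryNode`, `Parser`) are invented for this example:

```ruby
# Nodes: each node on the tree evaluates to a value.
NumberNode = Struct.new(:value) do
  def evaluate
    value
  end
end

BinaryNode = Struct.new(:op, :left, :right) do
  def evaluate
    left.evaluate.send(op, right.evaluate)
  end
end

# Parser: reads input like "2 + 3 * 4" and builds the tree,
# giving * higher precedence than +.
class Parser
  def initialize(input)
    @tokens = input.scan(/\d+|[+*]/)
  end

  def parse
    expression
  end

  private

  # expression := term ('+' term)*
  def expression
    node = term
    while @tokens.first == "+"
      @tokens.shift
      node = BinaryNode.new(:+, node, term)
    end
    node
  end

  # term := number ('*' number)*
  def term
    node = number
    while @tokens.first == "*"
      @tokens.shift
      node = BinaryNode.new(:*, node, number)
    end
    node
  end

  def number
    NumberNode.new(Integer(@tokens.shift))
  end
end

puts Parser.new("2 + 3 * 4").parse.evaluate  # => 14
```

Note how the evaluation logic lives in the nodes, not the parser: the parser only builds the tree, and each node knows how to evaluate itself.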
Polymorphism comes from the Greek words ‘polys’ meaning much or many and ‘morphe’ meaning form or shape. In programming, it refers to the ability to use functions or methods in different ways for different objects or data types.
Having the ability to use the same method in a different way depending on the data is very useful in Ruby. It greatly decreases the need for long, ugly if statements.
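To make that concrete, here is a small hypothetical sketch (`Dog` and `Cat` are invented classes) contrasting a type-checking conditional with a polymorphic method:

```ruby
class Dog
  def speak
    "Woof"
  end
end

class Cat
  def speak
    "Meow"
  end
end

# Without polymorphism, the caller branches on type:
def describe_with_branching(animal)
  if animal.is_a?(Dog)
    "Woof"
  elsif animal.is_a?(Cat)
    "Meow"
  end
end

# With polymorphism, the same message works for every type,
# and adding a new animal requires no changes here:
[Dog.new, Cat.new].map(&:speak)  # => ["Woof", "Meow"]
```

Every new animal class makes the branching version longer, while the polymorphic version never changes: each class just defines its own `speak`.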
There’s a set of guidelines and principles to help us write clean code in object-oriented programming, commonly called SOLID, which was introduced by Robert Martin (aka Uncle Bob).
The five principles of SOLID are:
- Single Responsibility Principle
- Open-Closed Principle
- Liskov Substitution Principle
- Interface Segregation Principle
- Dependency Inversion Principle
Right now I’m going to explain the Single Responsibility Principle with a real-world example.
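As a quick sketch of the idea (the `Invoice` and `InvoicePrinter` classes below are invented for illustration): a class that both stores invoice data and formats it for display has two reasons to change, which violates the Single Responsibility Principle. Splitting the responsibilities looks like this:

```ruby
# Responsibility 1: holding the invoice data.
class Invoice
  attr_reader :amount

  def initialize(amount)
    @amount = amount
  end
end

# Responsibility 2: presenting an invoice. Display rules can now
# change without touching the Invoice class, and vice versa.
class InvoicePrinter
  def print(invoice)
    "Total due: $%.2f" % invoice.amount
  end
end

InvoicePrinter.new.print(Invoice.new(42))  # => "Total due: $42.00"
```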
In my last blog post, I showed how you can estimate how your algorithm’s running time grows with Big-O notation. But there are easier, more concrete ways to determine how long your algorithms will actually take to run. Ruby has two great tools that will help you find bottlenecks in your code and decide which algorithms are the most efficient: the Ruby profiler and the Benchmark module.
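As a taste of the Benchmark module (which ships with Ruby’s standard library), here is a small sketch comparing two ways of summing 1 through n; the task is arbitrary and the reported times will vary by machine:

```ruby
require "benchmark"

n = 100_000
array = (1..n).to_a

# Benchmark.bm prints a timing report for each labeled block.
Benchmark.bm(10) do |x|
  x.report("inject:")  { array.inject(:+) }
  x.report("formula:") { n * (n + 1) / 2 }
end
```

`Benchmark.realtime` is also handy when you just want the elapsed seconds for a single block as a number.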
Big-O notation sounds like a scary concept. I think it’s the math-y word ‘notation’ that makes it so, but it’s actually not that difficult to wrap your mind around.
Defining Big-O Notation
Big-O notation is used to classify algorithms by how they respond (e.g., the time it takes to process) to changes in input size.
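For instance, an O(n) operation slows down as the input grows, while an O(1) operation stays roughly flat. A small sketch using Ruby’s standard Benchmark module (the workload and sizes are arbitrary, and the printed times are machine-dependent):

```ruby
require "benchmark"

# Array#include? scans every element (O(n)); Hash#key? uses hashing
# (roughly O(1)). Only the array lookup should slow down as n grows.
[10_000, 100_000].each do |n|
  array = (0...n).to_a
  hash  = array.to_h { |i| [i, true] }

  array_time = Benchmark.realtime { 1_000.times { array.include?(n - 1) } }
  hash_time  = Benchmark.realtime { 1_000.times { hash.key?(n - 1) } }

  puts "n=#{n}: array #{array_time.round(4)}s, hash #{hash_time.round(4)}s"
end
```

Running this, the array lookups take roughly ten times longer at the larger size, while the hash lookups barely move: exactly the difference Big-O describes.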