A Fixpoint of Identity

Discoveries of a Mathematician/Computer Scientist

Office hours


I’m the teaching assistant for MATH116 this semester, and the office hours I originally chose didn’t suit most of my students. To remedy this, I set up a Doodle poll asking them which times worked for them. I downloaded their responses as an Excel table and exported it as a .csv (comma-separated values) file.

Then I wrote a script to “parse” the .csv and figure out which 3 hours to choose in order to minimise the number of people who can’t come to any of them. Works pretty well!
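The script itself is only linked, but the idea fits in a few lines. A sketch, assuming a Doodle-style CSV where the first row names the slots and each later row is a student’s name followed by “OK” in the slots that work (the exact export format is an assumption):

```python
import csv
from itertools import combinations

def best_slots(availabilities, n_slots, k=3):
    """Choose k slot indices minimising the number of students
    who can attend none of them (brute force over all k-subsets)."""
    def misses(choice):
        chosen = set(choice)
        return sum(1 for a in availabilities if not a & chosen)
    return min(combinations(range(n_slots), k), key=misses)

def from_doodle_csv(path, k=3):
    """Hypothetical Doodle export layout: header row of slot names,
    then one row per student with 'OK' marking workable slots."""
    with open(path, newline="") as f:
        rows = list(csv.reader(f))
    avail = [{i for i, cell in enumerate(row[1:]) if cell.strip() == "OK"}
             for row in rows[1:]]
    return best_slots(avail, len(rows[0]) - 1, k)
```

With ~10 candidate slots there are only 120 3-subsets, so brute force is entirely adequate here.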

Click here for the code

Written by krey

September 22, 2013 at 3:07 am

Posted in Uncategorized


An intuitive definition of total derivatives


I’ve been working with derivatives recently and once again found that the definition of the total derivative doesn’t make much intuitive sense.

A function f : \mathbb{R}^n \to \mathbb{R}^m is differentiable at \mathbf{x} iff there exists a linear map L : \mathbb{R}^n \to \mathbb{R}^m such that \lim_{\mathbf{h} \to \mathbf{0}} \frac{||f(\mathbf{x}+\mathbf{h})-f(\mathbf{x})-L\mathbf{h}||}{||\mathbf{h}||} = 0.

The supposedly intuitive explanation is that L is the best linear approximation of f at \mathbf{x}:

f(\mathbf{x}+\mathbf{h}) = f(\mathbf{x})+L\mathbf{h}+o(||\mathbf{h}||)

The problem with this explanation is that I couldn’t find a meaning for the word “best” that matches the equation. Suggesting that the derivative is the solution to an optimisation problem is also somewhat ironic, given how fundamental differentiation is to continuous optimisation.
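Whatever “best” means, the ratio in the definition is easy to probe numerically. A quick sketch (assuming NumPy; the function f(x, y) = (xy, x + y) and the point (1, 2) are just an example of mine, not from the post):

```python
import numpy as np

def error_ratio(f, x, L, h):
    """The ratio from the definition: ||f(x+h) - f(x) - Lh|| / ||h||.
    For the right L it should tend to 0 as h tends to 0."""
    return np.linalg.norm(f(x + h) - f(x) - L @ h) / np.linalg.norm(h)

# Example: f(x, y) = (xy, x + y) at the point (1, 2).
f = lambda v: np.array([v[0] * v[1], v[0] + v[1]])
L = np.array([[2.0, 1.0],   # gradient of xy at (1, 2) is (y, x) = (2, 1)
              [1.0, 1.0]])  # gradient of x + y is (1, 1)
x = np.array([1.0, 2.0])

for t in (1e-1, 1e-2, 1e-3):
    print(error_ratio(f, x, L, np.array([t, t])))  # shrinks with t
```

For this f the ratio along h = (t, t) works out to exactly t/\sqrt{2}, so it vanishes linearly in t.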

Zooming in

Here’s an explanation that I found more intuitive: a function f is differentiable at \mathbf{x} if, when you plot it and zoom in closer and closer at the point (\mathbf{x},f(\mathbf{x})), it looks more and more like a line (or, in general, a hyperplane). We shall call this being locally linear and try to formalise it.

But first, let’s look at an example (no audio):

[embedded video]

Written by krey

December 11, 2012 at 2:28 am

Posted in meh


Addition is commutative


I have never felt this certain that addition of natural numbers is commutative! I proved it myself from scratch (including annoying steps using symmetry of equality). But more importantly it was my first proof in Agda.

So far it seems that Agda is a bit horrible to read (will probably improve once I start using optional params) and write (which should also improve once I set up agda-mode), and it took me hours to prove this theorem, but the feeling of my proof being checked and knowing that it’s correct is just spectacular!

Quite often when I hand in solutions to a problem sheet, there will be questions that I’m very confident about having answered well. I hand in my answers to have them marked, but I’m not expecting any corrections; I’m basically certain my proof works. But Agda gives me a whole new level of certainty, which is very exciting! Sadly it takes an enormous amount of time to translate from paper to Agda, but hopefully I will get better at this over time.

Click here to see the proof!
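The Agda proof itself is behind the link. Purely as an illustration (a sketch in Lean 4, a different proof assistant, not the Agda code from the post), the same theorem proved from scratch, including the helper lemmas about zero and successor that make the induction go through:

```lean
-- Natural numbers and addition defined from scratch.
inductive N where
  | zero : N
  | succ : N → N

def add : N → N → N
  | N.zero,   n => n
  | N.succ m, n => N.succ (add m n)

-- Helper lemmas: recursion is on the first argument,
-- so we must prove the symmetric facts by induction.
theorem add_zero : ∀ m, add m N.zero = m := by
  intro m; induction m with
  | zero => rfl
  | succ m ih => simp [add, ih]

theorem add_succ : ∀ m n, add m (N.succ n) = N.succ (add m n) := by
  intro m n; induction m with
  | zero => rfl
  | succ m ih => simp [add, ih]

theorem add_comm : ∀ m n, add m n = add n m := by
  intro m n; induction m with
  | zero => simp [add, add_zero]
  | succ m ih => simp [add, add_succ, ih]
```

The shape is the same as in Agda: the two helper lemmas are exactly the “annoying steps” the post mentions.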

Written by krey

November 8, 2012 at 1:24 am

Posted in meh


Yoneda’s lemma and Cayley’s theorem


Many sources, including Wikipedia, claim that the Yoneda lemma is a generalisation of Cayley’s theorem. I was quite puzzled by this claim, and was unhappy with some of the explanations on the internet, so I decided to prove it for myself:

First, something I call Haskell notation: if F, G : \mathcal{C} \rightarrow \mathcal{D} are functors, then the set of natural transformations between them is denoted \forall C . F C \rightarrow G C. I also write (A \rightarrow B) for Hom(A,B).

Using this notation we state (the contravariant version of) the Yoneda lemma:

For any functor G : \mathcal{C}^{op} \rightarrow Set we have

G A \cong \forall C . (C \rightarrow A) \rightarrow G C

where the isomorphism is given by

\Phi : \forall G,A . G A \rightarrow (\forall C . (C \rightarrow A) \rightarrow G C) \newline  (\Phi _{G,A}  x)_C  f = G f x

Now let \mathcal{C} be a group, that is a category with only one object * and where all arrows are isomorphisms, and set

G = (\rightarrow *) \newline  A = *

so that

(\Phi _{\rightarrow *, *} x)_C f = (f \rightarrow *) x = x \circ f

\Phi _{\rightarrow *,*} x is a natural transformation from (\rightarrow *) to (\rightarrow *) that has only one component (at *). We define a functor K : \mathcal{C} \rightarrow Set by

K x = (\Phi _{\rightarrow *, *} x)_*

A quick glance at the definition of \Phi assures us that this is indeed a functor and we can see that K is injective on arrows (as \Phi was an isomorphism).
This allows us to view it as an injective monoid homomorphism from our group \mathcal{C} to (* \rightarrow *) \rightarrow (* \rightarrow *) (or Hom_{Set}(Hom_\mathcal{C}(*,*),Hom_\mathcal{C}(*,*)) in very formal notation).
As functors preserve isomorphisms, it is clear that the image of K is contained in Sym(* \rightarrow *), the symmetric group on the underlying set of our group.
From this we may conclude that \mathcal{C} is indeed isomorphic to a subgroup of a symmetric group.
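Unpacking that quick glance: K x is left composition with x, so functoriality and injectivity on arrows are one-line computations:

K (x \circ y)\, f = (x \circ y) \circ f = x \circ (y \circ f) = (K x)((K y)\, f) \newline (K x)\, \mathrm{id}_* = x \circ \mathrm{id}_* = x

In particular x can be recovered from K x by applying it to \mathrm{id}_*, which gives injectivity directly.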

The proof is finished, but there are some observations to be made. Firstly, most of the power of the Yoneda lemma was not used: we didn’t need the fact that \Phi was natural or that \Phi _{\rightarrow *, *} x was natural.

What was crucial, however, was to see how our group \mathcal{C}, which was born a category, could be turned into a group in Set using the (\rightarrow) functor.

It is also worth noting that \mathcal{C}^{op} is isomorphic to \mathcal{C} (a group is isomorphic to its opposite via inversion), so it doesn’t matter which version of the Yoneda lemma we use.

Written by krey

July 19, 2012 at 4:09 pm

Posted in meh


Part B results


I have (more or less) successfully finished my 3rd year at Oxford, so it’s time to go through my exam results and say a few words about the subjects!

Computer Science

Lambda Calculus and Types – 90%

One of my favourite subjects, but as it was my last exam, I didn’t have a lot of time to revise it. In the end, however, I was well prepared, having proved every little lemma in the notes and redone all the problem sheets up to types. I didn’t have much time for ‘Types’, which I learned on the morning of the exam, and sure enough, I chose to answer a question on them! I am happy with my result; good subject overall.

Lectures attended: none.

Written by krey

July 19, 2012 at 3:05 pm

Posted in meh


I repeat


There’s less than two weeks left before exams, trials more difficult than anything I’ve ever had to face. I find myself in a destructive loop of procrastination and anxiety. Today was a pleasant day spent revising Principles of Programming Languages. Well, it was spent coding something that was more exciting than preparing for exams.

My creation is a bounded version of the programming language IND, found in Harel’s book on Dynamic Logic. The idea is that rather than assigning a fixed value to a variable, we assign a list of possible/necessary values.

This is essentially a generalisation of non-deterministic programming to something quite similar to Alternating Turing Machines.
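The interpreter itself is only linked, but the alternating idea can be sketched in a few lines. This is a hypothetical toy encoding of mine, not Harel’s IND syntax: an existential choice succeeds if some value works, a universal choice only if every value does.

```python
def run(prog, env):
    """Evaluate a tiny alternating program.

    Programs are nested tuples:
      ("exists", var, values, rest)  -- SOME choice of var must make rest succeed
      ("forall", var, values, rest)  -- EVERY choice of var must make rest succeed
      ("check", predicate)           -- succeeds iff predicate(env) holds
    """
    if prog[0] == "check":
        return prog[1](env)
    tag, var, values, rest = prog
    branches = (run(rest, {**env, var: v}) for v in values)
    return any(branches) if tag == "exists" else all(branches)
```

For example, "for all x in {1, 2} there exists y in {1, 2, 3} with x + y = 3" succeeds, while restricting y to {1} makes the x = 2 branch fail and the whole program with it.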

The most interesting part of writing this code was that, when I wanted to quickly copy-paste some of my old parsers, I came across Counter, the first programming language I implemented.

It had remarkable similarities to the code I wrote today and made me realise how little progress I’ve made in this direction. 😦

It also reminded me of the seemingly unending inspiration I had back in those times. It appears to me that regaining this inspiration could be the most important step towards a successful academic career.

Here’s a link to the code on github.

Written by krey

May 18, 2012 at 8:48 pm

Posted in meh


Becoming a Champion


My girlfriend’s father enjoys a casual game of Bubble Breaker; he even got into the top 100 players on the website he frequents. Trying to impress/troll him led me to the following challenge: become the Bubble Breaker champion.

779 points richer

This move gets me a delicious 729 + 50 = 779 points

Bubble Breaker

Rules: pop a connected set [blob] of bubbles (> 1) to make them disappear and be replaced by the bubbles above. When a row is emptied, it is removed.

Goal: remove n rows and n columns


Scoring:

  • popping a blob of size k: k^2
  • removing k columns in a single move: 50 \cdot k^2

Termination: the game ends when the goal is reached or when there are no blobs of size > 1
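For concreteness, the pop/score mechanics above can be sketched like this (assuming 4-connected, same-coloured blobs — the usual Bubble Breaker rule — and an invented list-of-lists grid encoding):

```python
def blob(grid, r, c):
    """Flood fill: the maximal connected set of same-coloured cells containing (r, c)."""
    colour, seen, stack = grid[r][c], set(), [(r, c)]
    while stack:
        i, j = stack.pop()
        if (i, j) in seen or not (0 <= i < len(grid) and 0 <= j < len(grid[0])):
            continue
        if grid[i][j] == colour:
            seen.add((i, j))
            stack += [(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)]
    return seen

def move_score(blob_size, cols_removed=0):
    """Scoring as above: k^2 for a blob of size k, plus 50 * k^2 for clearing k columns."""
    return blob_size ** 2 + 50 * cols_removed ** 2
```

The screenshot’s 729 + 50 = 779 is move_score(27, 1): a 27-bubble blob that also clears one column.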


Well, as I didn’t want to spend a lot of my time playing Bubble Breaker, there was an obvious plan: write a program that plays it for me. To make things a lot easier, I asked a more experienced programmer, exFalso, to help.

The grand design


Sensor: a scraper that runs the Java applet and collects the data necessary for the controller
Controller: a (limited) depth-first search in the graph of game states, armed with awesome heuristics
Actuator: plays the game (in the applet) according to the “best” sequence of moves found by the controller

As for heuristics, we planned to introduce a number of variables describing the game state and see how they correlated with high scores.
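A depth-limited DFS controller of this shape might look as follows. This is a sketch, not the code we actually wrote: moves, apply_move, move_score and heuristic are placeholder parameters for the game-specific pieces.

```python
def best_moves(state, depth, moves, apply_move, move_score, heuristic):
    """Depth-limited DFS over game states; returns (value, move sequence).

    moves(state)         -> legal moves in this state
    apply_move(state, m) -> successor state after playing m
    move_score(state, m) -> points earned by playing m in state
    heuristic(state)     -> estimated value of a leaf state
    """
    legal = moves(state)
    if depth == 0 or not legal:
        return heuristic(state), []
    best = (float("-inf"), [])
    for m in legal:
        value, rest = best_moves(apply_move(state, m), depth - 1,
                                 moves, apply_move, move_score, heuristic)
        value += move_score(state, m)
        if value > best[0]:
            best = (value, [m] + rest)
    return best
```

Everything here is immutable-friendly: the search threads successor states through recursion rather than mutating a shared board, which is precisely why the “inherently stateful” argument below didn’t hold up.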

Moar Cheating

While I was working on the controller, exFalso started working on the sensor. He wanted to use JavaScript to get the applet’s position, but I thought it would be a better idea to trick the applet into running outside of the browser.

This required exFalso to decompile the applet. The result was shocking: the decompiled code had sensible variable names and was well-formatted. After writing a Swing container for the applet exFalso realised something that would change everything: despite the fact that the applet interacted with the server on each move, it was possible to send an arbitrary high score without playing.

A note on premature optimisation

Prior to discovering that we could “print our own money” we spent a lot of time arguing about the controller: I thought it should be written in Haskell (rather unsurprisingly) and he pushed C++. His argument was that depth-first search was inherently stateful (as well as the whole algorithm) and that mutable linked lists are part of the obvious solution. Even after I convinced myself that this was rubbish, I still spent a lot of time thinking about the difference between using immutable arrays or lists of lists for the representation of the grid.

I think this is a rather striking example of premature optimisation: when you optimise code that won’t even be used in the final “product”.

Finally I’d like to thank Alex and Pete for their advice.

Click for code

Written by krey

September 12, 2011 at 2:20 pm

