It has been a while since I last posted anything; the previous post was written while I was playing with GNU make.

As the first half of 2018 comes to an end, I started to think about what I have done during these six months, and how it fits with my priorities for the year. I do not really have what one would call “new year resolutions”, but I try to set myself priorities, and to stick to them.


Broadly speaking, my top priority is always to learn something new (and hopefully useful), every day if possible. But as I tend to be attracted by many topics, having a rough idea of what to focus on is necessary to keep some kind of direction. Besides learning for myself, which is about improving, I also try to set a development priority in my job. This matters because, outside teaching periods, I am almost completely independent on a daily basis (I do have an agreement on a five-year plan with my main employer, though).

So my current priorities are:

  • To carry on writing and developing material on the main topic I teach (corporate finance). I want to provide free, quality material to my students, without them needing to buy any textbook.
  • To rationalize the way I write, to improve the set of helper scripts I wrote for the task, and to get a better command of the main tools I use for all that (pandoc, vim, make, python).
  • To learn a new programming language, haskell.
  • To learn about NLP, or “Natural Language Processing”, as I think there are numerous applications of NLP in the teaching/education fields.

Note that another high-ranked priority for me is to explore topics I neglected when I was younger, especially philosophy. The idea is to work out a clearer “plan for living”, or “philosophy of life”. I will not elaborate on this here today, as it is more a path than an objective (the journey matters more than the destination). It is also rather personal, of course.


I started to write some kind of handouts or chapters for my students last year (around January 2017). The idea was to provide written documents of 15 to 30 pages for each of the topics I teach in my main course, corporate finance. Even though the objective is not (yet?) to write a book, I like the idea of viewing those as chapters of sorts: there is a logical progression in the topics, and later chapters build on earlier ones.

I am aware that there are plenty of very good finance textbooks around, and I certainly do not want to compete with their authors. Still, most of these books are pricey, some very good ones are no longer available, and I confess that, after more than 25 years of experience, I felt I might bring my own small stone to the edifice.

Each document (or handout, or chapter) was written following these rules:

  • write in English first, translate to French later,
  • provide learning objectives in the beginning and a summary in the end,
  • give plenty of practical examples, as close to “real life” as possible,
  • provide exercises, with answers at the end of the document,
  • use versioning for all documents (calendar based versioning),
  • make them freely available and open source: most of the documents have a Creative Commons license.

In the course of this semester I reorganized the whole thing a bit and ended up with this structure:

  • helpers (bibliography, glossary, cheatsheets, …)
  • tools (how to use a spreadsheet, how to read financial statements, …)
  • valuation (time value of money, …)
  • financing (debt, equity)
  • risk management
  • portfolio management

And finally I wrote four new “chapters” and updated the global bibliography:

  • Finance bibliography v2018.02.1
  • Stocks valuation v2018.03.2
  • Fundamentals of risk and return in finance v2018.04.1
  • The weighted average cost of capital v2018.05.1
  • The investment decision process v2018.06.1

Yes, that is roughly one document per month. I am a slow writer, I procrastinate, I am a perfectionist, and I have time.

Finally, on the writing side, I also wrote three posts on this blog, two about GNU make and this one.


Writing helpers

As you probably know already, I write everything in pure text, using mainly pandoc markdown, but also reStructuredText and LaTeX.

To ease the process of producing the handouts and chapters referred to in the previous section, I wrote some filters, snippets and makefiles to go with pandoc.

After nearly one year, the whole repository had become a big mess and needed some refactoring. I also wanted a single makefile, able to generate the final pdf document from either pandoc markdown or LaTeX sources.

So I started with the reorganization and refactoring of the tools. The main idea was to anticipate the different kinds of documents I would have to produce. I ended up with the following categories:

  • a chapter is written “course” material (again, the documents referred to in the previous section),
  • slides may be produced with beamer, with an associated handout for distribution to the students,
  • a problem is a set of exercises with answers, accompanying either of the above when available,
  • and finally, a shortdoc is a 1-2 page document, for example a cheatsheet or a bibliography.

All these documents may be either public (and then available online) or private: the slides and the problem answers are usually not freely available. In addition, as all the sources are managed with git and the repository is public, sources may be encrypted when I want to keep them secret. The makefile producing the pdf decrypts the source on the fly if the required private key is available (i.e. on my laptop, but not on yours).

Once the refactoring was done and things were better organized, I could finally finish the makefile, which, as we have seen, takes plain or encrypted sources, in either pandoc markdown or LaTeX, plus some metadata, to:

  • produce the final LaTeX source (including preamble, table of contents, last-page footer, etc.),
  • make the pdf version of the source,
  • install it locally (by hard-linking to a destination directory),
  • upload it to the website, updating the index accordingly,
  • roll back to the previous version if the upload did not go well.

Of course, as it is a makefile, it only does what is necessary: if the sources have not changed and the pdf already exists, it will not be recreated.
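Schematically, the core of it boils down to a few pattern rules like the ones below. This is a simplified sketch, not the real makefile: the actual commands differ, the real version also handles metadata, preambles, installation and uploads, and gpg is only an assumption for the encryption tool.

```makefile
# Simplified sketch: build every pdf found in the current
# directory, from markdown or LaTeX sources.
all: $(patsubst %.md,%.pdf,$(wildcard *.md)) \
     $(patsubst %.tex,%.pdf,$(wildcard *.tex))

# pandoc markdown source -> pdf (through LaTeX)
%.pdf: %.md
	pandoc --toc -o $@ $<

# plain LaTeX source -> pdf
%.pdf: %.tex
	latexmk -pdf $<

# encrypted source, decrypted on the fly when the key is
# available (gpg is an assumption; any encryption tool would do)
%.md: %.md.gpg
	gpg --quiet --decrypt --output $@ $<
```

Make chains the rules automatically: asking for `foo.pdf` when only `foo.md.gpg` exists triggers the decryption first, then the pandoc run.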

This makefile is meant to be present in the directory of each source document, and is designed to take care of the sources available in that directory.

I then wrote another makefile, intended to sit at the root of all the source directories, which can be used to perform the above actions (make the pdf, install locally, upload or roll back) recursively for all the sources. It is not fully tested yet, but it is usable, and the latest available versions were uploaded with it.
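The recursion itself follows the classic recursive-make pattern; the sketch below assumes one sub-makefile per document directory and the target names used above, which is a simplification of the real layout.

```makefile
# Simplified sketch of the top-level makefile: forward the
# requested goal to every directory holding a document makefile.
SUBDIRS := $(dir $(wildcard */Makefile))

.PHONY: all install upload rollback $(SUBDIRS)

all install upload rollback: $(SUBDIRS)

$(SUBDIRS):
	$(MAKE) -C $@ $(MAKECMDGOALS)
```

Running `make upload` at the root thus re-runs `make upload` in each document directory, and each sub-make rebuilds only what is out of date.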


Regarding the software I use, I also tried to rationalize a bit and to improve my workflow. The idea was to limit the number of tools, and to master them better.

A first example is the way I take and maintain notes about everything. I had been using zim for some years, but was not fully happy with it: I could not have a copy of my notes on my phone; on the laptop I had to start zim to take any note, instead of using my usual text editor (vim); its syntax, although not far from markdown, is specific to zim; and it was a bit unstable. In addition, I noticed that on some topics I was merely collecting bookmarks instead of taking real notes, and felt it was a bit silly to keep a set of bookmarks anywhere else than where they belong: the web browser.

So I recently switched to another system:

  • I went back to basics and keep all my bookmarks in my web browser (firefox), using tags for organization,
  • I started to use vimwiki to take notes and quick notes, using markdown as the markup and vim as the editor,
  • I installed and set up termux on my phone: this means I can have vim and vimwiki on the phone as well, and synchronize my notes between the laptop and the phone with syncthing.

(The last steps, installing vimwiki on the phone and synchronizing the notes through syncthing, still remained to be done at the time of writing.)

It was also an opportunity to have a look at my vim setup and at the plugins I really use or had forgotten, and generally to increase my knowledge of, and confidence with, this text editor, one of the pieces of software I use the most. This is of course a work in progress, but some notable improvements are already visible, for example the use of completion and omni-completion.

A second example is related to this blog: so far it is produced with the Pelican static website generator, which uses reStructuredText as the markup language for the sources. This is not so bad: it means I use vim to edit the posts, and since Pelican is written in python I can hack it a bit when necessary (it has already happened once).

Anyway, would it not be better to use pandoc markdown for these posts as well? I use this markup nearly every day (for my writings and for taking notes), whereas I use reStructuredText only once in a while to post here. So I am considering the following combination: pandoc (again) for the markdown-to-html rendering, and hakyll as the static website generator. It is only an idea so far, but as hakyll is written in haskell, which I am currently learning, it makes sense. And, by the way, pandoc is written in haskell too.
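To give an idea of what that would involve, a minimal hakyll program, adapted from its introductory examples, looks like the following. The posts/ layout is an assumption; a real site would add templates, an index, and a feed.

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Hakyll

main :: IO ()
main = hakyll $ do
    -- render every markdown post to html through pandoc
    match "posts/*.md" $ do
        route   $ setExtension "html"
        compile $ pandocCompiler >>= relativizeUrls
```

Compactness like this is part of the appeal: the whole site configuration is just a haskell program, hackable like any other.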


On a more personal note now, what do we have on the improving, development and learning side?


The first point in my priority list was to learn “a bit of haskell”. I actually began reading Learn You a Haskell for Great Good online in spring 2017, but quickly gave up, for several reasons: I lacked time, as I had started writing the aforementioned finance material; I lacked motivation, as I had no real project in haskell; and a bunch of other bad reasons.

So I put “learn haskell” on my priority list for this year, and it is going better. First, I found a lot of resources I did not know about, including one very opinionated but interesting guide. I am currently finishing the chapter 11 exercises of the Haskell Book, which is certainly not as good as its author claims, but still very usable, especially because of the numerous exercises it offers. The exercises I have done so far, from this book and from other resources, are available online.

It might seem strange that a corporate finance professor, not particularly oriented towards quantitative finance, would want to learn yet another programming language, especially haskell, which has a reputation of being the ivory tower of category theorists. This raises two different questions: why learn programming in the first place, and why haskell?

Why learn programming? If you are, like me, someone who does not primarily work with his hands, who does not regularly make something (say, a wooden table for your living room, or your own house, or whatever), you know you are missing something: the pleasure of looking at something you made, genuinely proud of what you achieved with your bare hands and a few tools. I have not really produced anything hand-made recently, and for me the closest experience to that feeling of accomplishment is when I write a chapter (see above), and even more when I successfully complete a script or a full programming project. Learning to program as a side activity is challenging, interesting, rewarding and useful. Try it!

So, why learn haskell? I had already played with the python programming language, and still have a lot to discover about it. But when I started to use pandoc (which, again, is written in haskell) and had a look at the sources, I read the strangest code I had ever seen. This of course piqued my curiosity, and I investigated a bit more. Pure, lazy, functional programming: all of these had to become meaningful to me. That is how it all started. Now it makes sense to me to keep learning it: I like being able to hack on the tools I use, it teaches me a safe and interesting way to code, and it is simply challenging.
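To give a taste of what seduced me, here is a tiny example of laziness at work: an infinite list is a perfectly ordinary value, as long as only a finite part of it is ever demanded.

```haskell
-- Laziness: 'evens' is an infinite list, but nothing is
-- computed until (and unless) some part of it is needed.
evens :: [Integer]
evens = filter even [1..]

main :: IO ()
main = print (take 5 evens)  -- prints [2,4,6,8,10]
```

In a strict language the definition of `evens` would loop forever; here the `take 5` drives exactly how much of the list gets built.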

Natural Language Processing

Two years ago I followed and completed Andrew Ng’s Stanford Machine Learning course on Coursera. I did it to get the basics of machine learning, in order to be at least able to sort out the noise and the bullshit from the real information in my daily reading of the news. As with cryptocurrencies, the number of self-appointed experts who actually know nothing but a few buzzwords keeps rising, and I prefer to avoid them.

In addition to learning the bare minimum about it, I was also interested in finding out whether some applications might be useful to me, especially in my teaching. I think the difficult field of natural language processing could lead to really interesting applications in education: we teach and challenge our students mainly through natural language, written and spoken, as the communication medium. I thus put NLP in a corner of my head, then on a projects list, and finally among this year’s priorities. Well, I must admit that, beyond a few bookmarks and identifying good material to read and study, I have not made any progress here yet. To be honest, it would be surprising to see any change before next year, as I will be teaching from September to the end of the year, so my free time will shrink like the ice cube in my “cốc trà đá” (glass of iced tea) under the July heat in Hanoi.

See you in December!