Category: Rants

Don’t be afraid of dependency updates

Lots of places I’ve worked at have had an irrational fear of upgrading their dependencies. I understand why: when you have something that works, you don’t want to rock the boat. You want to focus on building your product, not dealing with potential runtime errors. Your ops team is happy, things are stable. Life is great.

However, just like running from your problems, freezing your dependencies is a recipe for disaster. Just like normal software maintenance, your dependencies MUST be upgraded on a regular basis. It sucks, and nobody likes dealing with weird transitive issues, but without a regular upgrade schedule (every 6 months to a year at minimum) you run the risk of realizing that you can’t upgrade at all!

This is a crappy place to be in, and you know when you’re there: you try to pull in some updated library that has the features you want and/or need, and everything either fails to compile or blows up at runtime. You end up with a giant mishmash of dependency exclusions, staring fruitlessly at dependency graphs trying to figure out “if I pick one minor version down of this and one minor version up of that, maaaaybe it’ll work.”

What managers, and sometimes even leads, don’t understand is that without staying on top of this, your cadence will slow down. The first few years you won’t notice, but if you let things stagnate, come 3, 4, or 5 years later it will be very hard to update. Without updates you’re missing out on security fixes, industry standard changes, performance boosts, bug fixes, etc.

I’m not advocating for staying on the bleeding edge, but it is worth staying up to date. There is a difference. Bleeding edge usually means alphas and betas of libraries/products, ones that haven’t been battle tested or settled down (maybe the API is constantly in flux; looking at you, Angular 2.0). But stable releases should be adopted. Your team needs a plan for upgrading, and for isolating dependencies and changes. You need to be able to silo projects so that upgrades in one place don’t require major cascading upgrades somewhere else. If you run into that, unfortunately you have a poorly factored ecosystem that needs to be trimmed and decoupled.

And if you do find yourself in this situation, especially on something you inherited: I feel you. Trust me.

Sometimes you have to fail hard

This is a post I wrote in the middle of 2013 but never published. I wanted to share it since it’s a common story across all technologies and developers of all skill levels. Sometimes things really just don’t work. As a postscript, I did come back to this project and had a lot of success. When in doubt, let time figure it out :)


For the last couple of weeks, I’ve been trying my hand at the node.js ecosystem. I had an app idea, but I wanted to make sure I chose my tech stack wisely. There’s no better way to get familiar with different stacks than to get your hands dirty and try them all, so that’s what I did.

Sometimes when you start with a new language or platform things come easy: you can blaze a burning trail writing great software. You’re like an extension of the computer; everything just works, and works great. Sometimes, though, like this time for me, you putter and stall and hit roadblocks at every turn.

At times it feels like a waste of time. And at times I’m frustrated. I keep saying to myself “I’m better than this! Why am I stuck??”. But even though I haven’t made any real progress on my app idea, I have learned tons about node.js and its accompanying workflow. Frequently, failing is where you really learn the most. It’s easy to forget that even when you grind to a halt, you are still learning. As long as you are learning, it’s not time wasted.

Just to prove the point, let me recount some of my recent failings.

Sequelize

I started my exploration with just regular node.js. I set up some routes and everything was cool. Then it was time to add in a backing store. At first I wanted to try using a MySQL ORM (because in the past I’ve always done SQL by hand and I wanted to do something different). I tried out sequelize but found that not only did it not support transactions (and apparently transaction support in node.js is a pain, since you need to write a connection pool to manage concurrent MySQL connections), but it also never set up any actual foreign keys in the schema. This means you can easily corrupt your data in ways the database would otherwise have prevented. While I did have a version working, I didn’t particularly like the workflow, so I started over. To be fair, the sequelize developer was extremely helpful and responsive on Twitter, and maybe I’ll try this library again in the future (when transaction support is added).
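
For reference, this is roughly the shape of what I was trying. A minimal sketch assuming the sequelize API of the time; the model names are my own illustration:

var Sequelize = require('sequelize');
var db = new Sequelize('appdb', 'user', 'password');

var User = db.define('User', { name: Sequelize.STRING });
var Task = db.define('Task', { title: Sequelize.STRING });

// This adds a UserId column to Task, but (at the time) the generated
// schema carried no actual FOREIGN KEY constraint behind it, so the
// database couldn't stop you from inserting a Task pointing at a
// nonexistent User.
User.hasMany(Task);

// Creates the tables from the model definitions.
db.sync();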

Mongoose

After sequelize, I switched over to using mongoose with MongoDB. My only history with document-based stores is with Lucene, but in that situation I was storing actual documents and using full text search. I spent a couple days reading up on mongo and mongoose, and I had a quick document example up. I was able to insert and query users and related data pretty easily. Then I started to think about how to properly structure my schema for the app I wanted to write, in such a way that working with the data was a pleasure while maximizing performance and throughput. This stumped me. Researching NoSQL schema design patterns led me to reading about linked vs. embedded documents, map/reduce with mongo, populating embedded documents with mongoose, different query types and syntax, etc. Embedding too much meant I had to search through a document to find an inner document. Linking too much meant lots of extra data calls. Duplicating too much meant I ran the risk of out-of-sync data. I still haven’t quite settled on a good schema, so I took another step back and tried a different approach.
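
To make the embedded vs. linked tradeoff concrete, here’s a rough sketch of the two schema shapes in mongoose (the names are my own illustration, not my actual app):

var mongoose = require('mongoose');

// Embedded: comments live inside the post document. One read fetches
// everything, but you have to dig through the array to touch a single
// comment, and big arrays bloat the document.
var embeddedPost = new mongoose.Schema({
    title: String,
    comments: [{ author: String, text: String }]
});

// Linked: comments are separate documents referenced by id. Writes stay
// small and local, but reading a post's comments costs extra queries
// (mongoose can populate() the references for you).
var linkedPost = new mongoose.Schema({
    title: String,
    comments: [{ type: mongoose.Schema.Types.ObjectId, ref: 'Comment' }]
});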

Typescript

Then I decided to give TypeScript a try. Since I wasn’t making good progress, maybe doing things with TypeScript would help the ideas gel. At first, again, this was great. I got strong typing, succinct lambda syntax, cleaner classes and functions, etc. Since I was doing so well, I thought that maybe I’d try to strongly type the parts of mongoose that I had working for my app. Here I hit another roadblock. I wanted to map a function proxy that mongoose gives you to a strongly typed class declaration. This led me to reading about ambient declarations in TypeScript, poring over the TypeScript spec, and furiously searching every TypeScript Stack Overflow post and blog out there. I also had to learn how modules are loaded with CommonJS. On top of all of that, I ran into a problem running unit tests written in TypeScript with nodeunit (though I finally did figure this out; the trick is to export a variable that has references to your testing class functions). At one point I even managed to crash the TypeScript compiler!
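
For the curious, the nodeunit workaround looked roughly like this (a minimal sketch with illustrative names): nodeunit runs whatever functions it finds on the module’s exports, so you export a variable whose properties reference the class’s test methods.

// tests.ts
class MathTests {
    testAddition(test: any) {
        test.equal(1 + 1, 2, "1 + 1 should be 2");
        test.done();
    }
}

var instance = new MathTests();

// nodeunit enumerates the exported object's properties and runs each
// one as a test, so hand it references to the class's methods.
export var tests = {
    testAddition: (t: any) => instance.testAddition(t)
};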

What’s next?

I’m at the point now where I have learned a lot, some things still don’t work, and I need to sit back and take a break. It’s disheartening to fail to make progress at every turn, but that’s the way you learn. Without major failures you don’t come to appreciate the nuances of how things work and how things are pieced together. I’ll come back to this project in a few weeks and probably feel a whole lot better about it. In the end, what I’ve been reminded of is that there’s no shame in sometimes failing hard. Really, really hard.

Review of my first-time experience with Haskell editors

When you start learning a new language, the first hurdle to overcome is how to edit, compile, and debug an application. In my professional career I rely heavily on Visual Studio and IntelliJ IDEA as my two IDE workhorses. Things just work with them. I use Visual Studio for C#, C++, and F# development and IDEA for everything else (including Scala, TypeScript, JavaScript, Sass, Ruby, and Python).

IDEA had a Haskell plugin, but it didn’t work and caused exceptions in IntelliJ 12+. Since my main IDEs wouldn’t work with Haskell, I took to researching what I could use.

Requirements

While some people frown on the idea of an IDE, I personally like them. To quote Erik Meijer:

I am hooked on autocomplete. When I type xs “DOT” it is a cry for help what to pick, map, filter, flatMap. Don’t know upfront.

Not only that, but I want the build system hidden away. I want immediate type checking and error highlighting. I want code navigation, syntax highlighting, and an integrated debugger. I want all that, and I don’t want to have to spend more than 30 seconds getting started. The reason is that I have problems to solve! The focus should be on the task at hand, not fiddling with an editor.

In college I used Vim, and while it was excellent at what it did, I found that it really wasn’t for me. Switching between command mode and edit mode was annoying, and I really just want to use the mouse sometimes. I also tried Emacs, and while it did the job, I think the learning curve was too high without enough “oo! that’s cool!” moments to keep me going. If I did a lot of terminal work (especially remote) then mastering these tools would be a must, but I don’t. I know enough to do editing when I have to, but I don’t want to develop applications in that environment. When you find a good IDE (whether it’s a souped-up editor or not) your productivity level skyrockets.

Getting Haskell working

Even though I’m on a Windows machine, I still like to use unix utilities. I have a collection of unix tools like ls, grep, sort, etc. Turns out this is kind of a problem when installing Haskell. You need the official GNU utils for wget, tar, and gzip, otherwise certain installations won’t work. Also, if you have TortoiseGit installed on your machine and in your path, some other unix utils are available too. To get Haskell working properly I had to make sure the GNU utils came first in the path, before any of the other tools.

On top of that, I wasn’t able to get the cabal package for Hoogle to install on Windows. About a week later, when I was trying to get Haskell up and running again, I found this post, which mentioned that they had just fixed a Windows build problem.

Leksah

Once Haskell was built, I turned to finding an IDE. My first Google search pointed me to Leksah, which initially looked like exactly what I wanted. It had auto completion, error checking, debugging, etc. And it had a sizzlin’ dark theme that I thought was cool. I installed the 2013 Haskell platform (which contains GHC 7.6.3) and tried to run the Leksah build I got from their site. Being a Haskell novice, I didn’t know that you had to run a Leksah compiled against the GHC version you have, so nothing worked! Leksah loaded, but I was immediately bombarded with questions about workspaces, cabal files, modules, etc. This was overwhelming. I just wanted to type in some Haskell and run it.

Once I figured all that out, though, I couldn’t get the project to debug or any of the Haskell modules to load. Auto complete also wouldn’t work.

Frustrated, I spent 2 days searching for solutions. I eventually realized I needed the right version of Leksah and found a beta build posted in the Leksah Google forums. Unfortunately this had other issues. I again couldn’t debug (clicking the debug button enabled it, and then it immediately disabled), the GTK skin looked wonky, and right clicking opened menus 20 pixels above where the mouse actually was.

Given all this, I gave up on Leksah.

Sublime Text

The next step was Sublime Text with the Sublime Text Haskell plugin. I was skeptical here, since Sublime Text is really just a fancy text editor, but people swore by it so I gave it a shot. Here I had better luck getting things to work, but I was still unhappy. For a person new to Haskell, the exploratory aspect just wasn’t there. There’s no integration with GHCi for debugging, and I couldn’t search packages for what I wanted. Auto complete was faulty at best; it wouldn’t pick up functions in other files and wouldn’t prompt me half the time.

Still, it looked sharp and loaded fast. I was a big fan of the REPL plugin, though loading things into the REPL was kind of a pain. I also liked all the hot keys: adding inferred types was easy, and checking types was reasonably easy, but the lack of good code navigation and proper auto completion irked me.

EDIT: I originally wrote this a few weeks ago even though it was just published today, and since then the REPL loading was fixed, along with a bunch of other bugs. In the end I’ve actually been using Sublime Text 2 for most of my small project editing, even though I liked the robustness of EclipseFP a lot.

EclipseFP

EclipseFP is where I finally hit my stride. Almost immediately everything worked: debugging, code navigation, syntax highlighting, errors as you type, etc. Unfortunately I couldn’t get the Hoogle panel to work, but the developer was incredibly responsive and walked me through the issue (and updated the plugin to work with the new Eclipse version, “Kepler”). I also enjoyed the fact that working in a file auto-loaded it into the GHCi REPL, so I could edit and then test my functions quicker. On top of that, the developer recently submitted a pull request to the Eclipse theme plugin, so new dark themes will be available soon!

One thing I do wish is that the REPL had syntax highlighting like the Sublime Text REPL does, but that’s OK.

Conclusion

In the end, while I can see how people more familiar with Haskell would choose the lightweight editor route (such as Sublime), people new to the language really need a way to get up and running fast. Without that, it’s easy to get turned off while trudging through a new language. A good IDE helps a user explore and automates a lot of the boring nastiness that comes with real development.

Working on a long-term svn branch

I work on a reasonably small team, and for the most part everyone works in trunk. But sometimes you need to switch over to a long-term feature branch (more than a week or two) that can last months. The problem is that your branch can easily diverge from trunk. If the intent is that the feature branch will eventually become the master (trunk), then you should merge trunk into the feature branch frequently. For me, this method has worked really well.

Merging often lets you take the trunk fixes as they happen and manually resolve any conflicts as they come in. Since the feature branch is going to be the final thing (when the feature is done), svn needs to know how to deal with these conflicts. It’s much better to deal with them as they come in than to try to integrate a feature branch after months of work, only to face an svn merge with hundreds of conflicts.

The problem with resolving those conflicts later is that you can’t remember the context anymore. If you have a conflict that spans 2 or 3 files, it’s easy to get lost in what needs to be discarded, what needs to be modified, and what needs to be resolved with local or repo changes. This just means that your QA team is going to absolutely hate you, because nobody is confident that the merge was complete: something could be missing, or a logical piece isn’t right. By merging frequently from trunk into the branch, you make svn’s job easier. It knows how to resolve potential conflicts because you already did it.

You can take this one step further and do the same thing with multiple feature branches. Let’s say you have a setup like this:

[diagram: trunk with two feature branches, the second branched off the first]

You have two feature branches and trunk. Periodically you should merge trunk into the first branch (I do this every Monday morning). Then periodically merge the first branch into the second. When the first branch is done, you can easily reintegrate it.
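
In svn terms, the routine looks roughly like this (branch names are mine, and I’m assuming svn 1.6+ for the ^/ repo-relative syntax):

# Every Monday, in a working copy of the first feature branch:
svn merge ^/trunk
svn commit -m "merge trunk into feature-1"

# Periodically, in a working copy of the second feature branch:
svn merge ^/branches/feature-1
svn commit -m "merge feature-1 into feature-2"

# When the first feature is finished, from a trunk working copy:
svn merge --reintegrate ^/branches/feature-1
svn commit -m "reintegrate feature-1"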

After you reintegrate the first branch, you can start merging the second branch from trunk directly:

[diagram: after reintegration, the second branch now merges from trunk]

Eventually, when the second branch is done, you can reintegrate it back into trunk, and you won’t have any conflicts.

The important thing here is to do your due diligence in making sure all the conflicts and merges are properly done, and done often. Don’t wait till the last minute; it can be time consuming, but it’s a lot easier to do this upfront than all at the end.

A response to “Ten reasons to not use a functional programming language”

If you haven’t read the top ten reasons to not use a functional programming language, I think you should. It’s a well-written post that ironically debunks a lot of the major trepidations people have with functional languages.

But I wanted to play devil’s advocate here. I read a lot of articles on functional programming, and everyone touts a lot of the same reasons to use functional languages; this post was no different. What these posts always lack, though, is an acknowledgement that functional isn’t the be-all and end-all of language solutions. It has plenty of problems itself, and it’s just as easy to critique it using the same ten reasons. With that said, I wanted to share a few of my opinions regarding functional programming using the same format as the original article.

Reason 1: I don’t want to follow the latest fad

The author’s point here is that people claim functional is a fad, and he’s right that it’s not. Functional has been around as long as imperative has; in fact, Alonzo Church pioneered it with the lambda calculus.

That said, people don’t like functional because it’s hard to map to their mental model. I don’t know about you, but when I think of doing something 4 times, I don’t think of a recursive loop with an accumulator; I think of a for loop. Making the mental jump to functional isn’t easy for everyone, which is why it has been slow to reach the mainstream.
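
To make the contrast concrete, here’s a small F# sketch of both mental models (my own illustration, not from the original article):

// Imperative mental model: a counted loop.
for i in 1 .. 4 do
    printfn "iteration %d" i

// Functional mental model: a recursive loop carrying the counter.
let rec loop i =
    if i <= 4 then
        printfn "iteration %d" i
        loop (i + 1)
loop 1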

However, some aspects of functional are reasonably mainstream. Lambdas, options, first-class functions, and higher order functions are available in many languages such as Ruby, Scala, C#, JavaScript, Python, etc. Some, like Scala, even encourage immutable types! Functional isn’t a fad, but pure functional may be.

Reason 2: I get paid by the line

The point here is that functional languages are usually syntactically shorter. An example the author posts is this:

public static class SumOfSquaresHelper
{
   public static int Square(int i)
   {
      return i * i;
   }

   public static int SumOfSquares(int n)
   {
      int sum = 0;
      for (int i = 1; i <= n; i++)
      {
         sum += Square(i);
      }
      return sum;
   }
}

compared to

let square x = x * x
let sumOfSquares n = [1..n] |> List.map square |> List.sum

But that’s cheating. What if we did this instead:

public int SumOfSquares(int n)
{
    return Enumerable.Range(1, n).Select(i => i * i).Sum();
}

And

let square x = x * x
let sumOfSquares n = [1..n] 
                         |> List.map square 
                         |> List.sum

Now who has more lines? It’s all in how you see it. Granted, both are leveraging higher order functions, but most modern imperative languages support that. Comparing crappy code with good code is never a fair comparison. Terse code can be written in (almost) any language (sorry, Java).

Reason 3: I love me some curly braces

Personally I don’t like whitespace-dependent languages, since it’s easy to make scoping mistakes, but that notwithstanding, let’s look at some Clojure:

(defn run-prep-tasks
  [{:keys [prep-tasks] :as project}]
  (doseq [task prep-tasks]
    (let [[task-name & task-args] (if (vector? task) task [task])
          task-name (main/lookup-alias task-name project)]
      (main/apply-task task-name (dissoc project :prep-tasks) task-args))))

While functional code usually has fewer curly braces, many functional languages have a whole lot more parentheses.

Reason 4: I like to see explicit types

This is a common complaint from people who aren’t used to functional, and I can understand it, because if someone asked you what the signature below means at first glance, what would you say?

('State -> 'T1 -> 'T2 -> 'State) -> 'State -> 'T1 list -> 'T2 list -> 'State

Practiced functional programmers can tell it’s a function that takes a function (which takes a state and two items and returns a new state), a seed state, and two lists, and returns a final state. This is the type signature of List.fold2, and it’s a mouthful!
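
To ground that, here’s a small usage sketch of my own: fold2 threads an accumulator through two lists at once, so something like a dot product falls out naturally.

// The folder takes the running state plus one element from each list
// and returns the new state; fold2 threads it through both lists.
let dotProduct xs ys =
    List.fold2 (fun acc x y -> acc + x * y) 0 xs ys

// dotProduct [1; 2; 3] [4; 5; 6] = 1*4 + 2*5 + 3*6 = 32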

Compare to the example the author gave:

public IEnumerable<IGrouping<TKey, TSource>> GroupBy<TSource, TKey>(
    IEnumerable<TSource> source,
    Func<TSource, TKey> keySelector
    )

At first glance, without caring about the types, you can tell it returns an enumerable, it takes an enumerable, and it takes a function. From the signature you can even see how the source and the selector map to each other. Reading the code, you get a sense of how things work together. I won’t lie, the signature is nasty and verbose. Part of me wishes C# had inferred method signatures, but the other part really likes that I can glance at something and get a big picture overview of what is happening.

On top of that, it’s easy to make the mistake of passing a function instead of applying a function. Take this example:

apply one two

You might think that we are applying two to one, or maybe one to two, or maybe I am passing in a function called one and an argument called two, or maybe both one and two are functions and are being combined and returned as another function, or maybe I meant to curry the apply function by applying one to two like this:

apply (one two)

It’s very easy to make mistakes like this in functional code, especially if the type arguments are generic enough. If the signature for apply is 'a -> 'b -> 'c, then you don’t know what you meant! Anyway, this is the complaint people have about implicit vs. explicit typing.

Reason 5: I like to fix bugs

I like to fix type mismatches AND bugs.

Reason 6: I live in the debugger

I still live in the debugger. To say that a language makes it so that if your code compiles it probably works just boggles my mind. Code can compile fine but be logically completely wrong. This happens in every language! In fact, debugging in F# can be complex because of the pipe operator (see my other post on debugging the pipe operator).

Reason 7: I don’t want to think about every little detail

I didn’t really get this one. The author talks about how, by having all the types matched up, you suddenly think of all the edge conditions. That’s just not true. Like I mentioned above, edge conditions are part of logical flow, not code semantics. You can have all the types match up and still have edge cases you didn’t consider.

Reason 8: I like to check for nulls

This one is fun, because I’ve brought it up with a coworker before. I, personally, really like the option type, but you can still get null-style failures. What about:

let foo = None

Option.get foo

This results in:

System.ArgumentException was unhandled
  HResult=-2147024809
  Message=The option value was None
Parameter name: option
  Source=FSharp.Core
  ParamName=option
  StackTrace:
       at Microsoft.FSharp.Core.OptionModule.GetValue[T](FSharpOption`1 option)
       at <StartupCode$FSharpScratch>.$Print.main@() in C:\Projects\Program.fs:line 28
  InnerException: 

Oops! So you still have to match on the option discriminated union for None, which means you are still checking for some sort of empty thing. Instead of having

if(x != null){
}

You start having

match x with
| Some item -> () // use item
| None -> ()      // still handling the empty case

I’m not saying matching is bad, just that it’s wrong to assume you get no exceptions simply because there are fewer nulls.

A safer design pattern is to use the maybe monad, which can easily be built onto every object type in C# using extension methods. I also like the get and getOrElse pattern that Scala has: you can either get (and risk blowing up on a None), or getOrElse and fall back to a default when it’s None. Much safer.
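
As a rough sketch of the extension-method approach (GetOrElse is my own name, mimicking Scala; it’s not a real BCL method):

using System;

public static class MaybeExtensions
{
    // Projects a possibly-null reference and falls back to a default,
    // so the caller never writes an explicit null check.
    public static TResult GetOrElse<TSource, TResult>(
        this TSource source,
        Func<TSource, TResult> selector,
        TResult fallback) where TSource : class
    {
        return source != null ? selector(source) : fallback;
    }
}

// Usage: string name = user.GetOrElse(u => u.Name, "anonymous");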

Reason 9: I like to use design patterns everywhere

Design patterns apply to code in any language. Writing flat code with no organization or thought to structure is going to break large apps. Those patterns exist, I’m sure, even in functional applications. And if they don’t, I’d be skeptical of those applications’ extensibility and robustness.

You always want to segment out 3rd party libraries behind proxies, you want to hide how things are created with factories, and you want to make sure that you interface with abstract classes and interfaces when necessary so you can inject different implementations. In fact, the F# team has videos showing how to implement certain design patterns in F#!

Reason 10: It’s too mathematical

This is something I’ve never heard people mention when complaining about functional, but maybe it’s true. I don’t know. I can’t comment on this one.

Conclusion

I think the article is hilarious and well written, and I am a huge proponent of functional languages. But some things about functional do annoy me. One of the reasons I really like F# is that you can do imperative work when you need to. That said, I think the big language winners will be languages like C# and Scala that embrace functional paradigms but also let you build imperatively.

Jon Skeet, C#, and Resharper

Today, at 1pm EST, the venerable Jon Skeet held a GoToMeeting webinar sponsored by JetBrains, reviewing weird and cool stuff about C# and Resharper. For those not in the know, Resharper is a static analysis tool for C# that is pretty much the best thing ever. Skeet’s a great speaker, and my entire team at work and I watched the webinar in our conference room while eating lunch.

I took some notes and wanted to share some of the interesting things that Jon mentioned. You can watch the video here. It’s an hour long and definitely worth viewing.

Recursive Parameterization

Skeet talked about how Resharper, and in fact the C# compiler, lets you do weird stuff like this:

public class SuperContainer<T>
{
        
}

public class Container<T> : SuperContainer<Container<Container<T>>>
{
}

Even though this lends itself to infinite recursive parameterization, compiling it is just fine. However, even if the type is never used in the assembly, when you run unit tests for that assembly you’ll get:

[screenshot: the recursive parameterization error from the test runner]

This is because unit tests usually use reflection to test your assemblies. If you don’t run a unit test over it and never access the type, you won’t have an issue. The problem, Skeet told me, isn’t in the C# compiler; it’s that the CLR goes, as Skeet put it, “bang”.

Access to modified closures

Jon talked about the problem of accessing modified closures and how it’s different in C# 5 vs. previous versions. The problem is described like this:

var list = new List<Action>();
foreach (var i in Enumerable.Range(0, 10))
{
    list.Add(() => Console.WriteLine(i));
}

In C# 4, the loop variable i is a single variable reused across every iteration, so each lambda closes over that same variable. By the time the actions run, the loop has finished and i holds its final value. Running this, you are going to get

9
9
9
9
9
9
9
9
9
9

The C# 4 and earlier solution is to make sure that a new variable is created each time the iteration runs:

var list = new List<Action>();
foreach (var i in Enumerable.Range(0, 10))
{
    int tmp = i;
    list.Add(() => Console.WriteLine(tmp));
}

This gives you the right answer. In C# 5, the handling of foreach was changed internally to give you the expected behavior: each iteration gets a fresh variable, so each lambda closes over a different one.

Covariance

Jon then spent a short bit discussing array covariance and how you can induce runtime failures that Resharper doesn’t warn you about. For example, the following code compiles, but isn’t runnable:

string[] x = new string[10];
object[] o = x;
o[0] = 5; // compiles, but throws ArrayTypeMismatchException at runtime

Statics in generic types

The next thing Jon talked about was the Resharper warning when you have a static member variable as part of a class with generics. For example:

public class Foo<T>
{
    public static string Item { get; set; }
}

[Test]
public void StaticTest()
{
    Foo<String>.Item = "a";
    Console.WriteLine(Foo<String>.Item);

    Foo<int>.Item = "b";
    Console.WriteLine(Foo<String>.Item);
    Console.WriteLine(Foo<int>.Item);
}

Which prints out

a
a
b

Interestingly enough, Resharper 7 gives me no warning on using a static item in a generic class. The problem arises when you think you have one cache or other static item per class, but it’s actually created once per constructed type (Foo<String> and Foo<int> each get their own copy). This was new info to me, so I thought it was pretty cool.

Virtual method call in constructor

Jon’s mentioned it on Twitter before, and it was cool to see him bring it up in his webinar: you can get into very strange situations when you call a virtual method from a base constructor. For example:

public class Base
{
    protected int item;

    protected Base()
    {
        VirtualFunc();                            
    }

    public virtual void VirtualFunc()
    {
        Console.WriteLine(item);
    }

}

public class Derived : Base
{
    public Derived()
    {
        item = 1;

        VirtualFunc();
    }

    public override void VirtualFunc()
    {
        if (item != 1)
        {
            throw new Exception("Should never do this");
        }
    }
}

Which prints out

System.Exception : Should never do this

Basically, the base class constructor is called first, so the member field hasn’t been set by the derived constructor yet. This means that if you run into this problem, you have no way of ensuring that items are initialized, even if they are set in the derived constructor. Resharper, thankfully, gives you a warning about this. So follow its advice!

Miscellaneous C# weirdos

Skeet ended with a smattering of random C# weirdness, like being able to declare a class called var even though var is a keyword. Also, comparing doubles can be… well, odd:

[Test]
public void CompareDouble()
{
    Console.WriteLine(double.NaN == double.NaN);

    var x = double.NaN;

    Console.WriteLine(x == x);
}

Here, Resharper says “hey, just change these values to true, they’re always going to be true”, but actually this prints out

False
False

What? That’s IEEE 754 floating point for you: NaN compares unequal to everything, including itself.

Conclusion

Jon quoted an unnamed source describing the content of the webinar:

You are entering dark places

And I tend to agree. Thanks for the great presentation Jon and the JetBrains team.

EDIT:

Skeet tweeted his sample solution project that he used in the webinar. For more samples of weird/cool C# stuff, check it out!

Advice to young engineers

I had the opportunity to represent the company I work for at an engineering networking event at the University of Maryland today, geared toward young engineering students of all disciplines. The basic idea was to be available for students to ask the questions they don’t normally get to ask of working professionals, such as “what’s the day to day like?” [lots of coffee, followed by coding all day], “what advice would you give to someone looking to get into xyz field?”, etc.

Personally, I had a great time being there since, as an alum, I felt like I could relate to their specific college experience. In this post, I wanted to share a couple of the main points that came up today during my informal discussions with the students.

Don’t be afraid of problems

I really wanted to stress this to the people I talked to today. You can’t anticipate every problem you will face in the technical world, and the only real way to succeed in a career is to accept that. The trick, though, is to know just enough to be able to find the information you want. If you can’t find the info you want, ask someone! Unlike school, group work is encouraged. On top of that, the things you learn in school won’t prepare you for all the real world things you will encounter. All a good education really gives you is the toolset to help you find the information you need.

Not being afraid of problems means you won’t freeze and give up when you’re faced with what seems like an insurmountable issue. Break things down into smaller pieces; do some research. Eventually you’ll find a solution, or at least be more informed as to why you can’t solve a certain problem, and hopefully you’ll have learned something from it.

Remember, nobody knows the answer to everything, and if they say they do they are lying.

Work with people you like

Almost 5/7 of the week (and sometimes more) is spent with the people at work. If you don’t like who you work with, that’s a problem. I think recent graduates don’t realize that at an interview, the interviewer should be selling themselves to the candidate just as much as the candidate to the interviewer. It has to be a good match, both professionally and personally. If you come out of an interview and feel like you just talked to the weirdest, most uncomfortable person ever, don’t work there! It’s natural to be afraid of saying no to a job offer, especially when you are starting out. But if you can afford to, it’s good to be picky. The people you work with can make all the difference between a place you consider a “job” and a place where you get to practice your hobby all day long and get paid for it.

On top of that, don’t work at a place where you won’t feel challenged. If you can find a mentor there, that’s even better, because guided growth (especially at the beginning of a career) is invaluable.

Also, don’t worry about any stigma of jumping ship early. Leaving a job after a year isn’t a bad thing if it’s not the right fit. Find somewhere else to work. Engineering is a field that is in demand right now, but it’s also extremely competitive and constantly changing. The only way to stay competitive is to always be learning.

Interests matter

For me, when I’m conducting interviews, what really sets people apart is their level of enthusiasm and interest. You can be the best engineer in the world, but if you don’t care about what you work on, or your field, you won’t do a good job. Being enthusiastic about your field is important. If you care about what you do, whether it’s computer engineering or biological engineering or whatever, you should have personal projects you can show. Even just showing you’ve gone above and beyond basic classwork and done research or internships in an area goes a long way.

I don’t think it matters how big or small the personal projects are; what matters is that you spent the time independently to do them. People frequently suggest contributing to open source projects, and that’s great if you have time. But if not, small personal projects also show interest and a real drive to learn and do better.

Engineering is fun as hell

I could spend all day doling out advice, but I only had about an hour with the students. In the end, while the engineering field can sometimes be fraught with roadblocks, if you can get past them it’s super fun and gratifying to build stuff that works. Sometimes as a student it’s hard to see how all the pieces fit, but they do, and if you persevere, a career in engineering can be extremely satisfying.