We all know how difficult it is to manage C++ dependencies as a project grows and we integrate cool features from cool frameworks: sometimes header-only libraries are not an option, because of the increased compilation time.
Last time we saw how to get started with CMake and discussed its advantages, especially when we want to compile our code with different compilers. But what if we want to build and deliver our modules?
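As a hint of where this is going, here is a minimal CMakeLists.txt sketch (the project name, target name, and file paths are hypothetical) that builds a shared library and installs it, so the module can be delivered to and consumed by other projects:

```cmake
cmake_minimum_required(VERSION 3.10)
project(mymodule VERSION 1.0 LANGUAGES CXX)

# Build the module as a shared library from a hypothetical source file.
add_library(mymodule SHARED src/mymodule.cpp)

# Expose headers: one path while building in-tree, another once installed.
target_include_directories(mymodule PUBLIC
    $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/include>
    $<INSTALL_INTERFACE:include>)

# Install the binary and the headers so the module can be packaged/shipped.
install(TARGETS mymodule LIBRARY DESTINATION lib)
install(DIRECTORY include/ DESTINATION include)
```

With this in place, `cmake --build . && cmake --install .` produces an installable artifact instead of just a local build.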
This year I finally made it to the Meeting C++ conference in Berlin. This is quite funny, because I live in this beautiful city and for me the “trip” consisted of a 30-minute ride on the tram and S-Bahn to the venue (the Andels Hotel, a really nice one), yet due to work duties I had never managed to attend before.
A successful project, in software as in any other industry, starts with planning in advance the resources needed to accomplish it. When you start a new project, you don’t just throw code into an editor and hack around. That is actually the case at hackathons, where you have just a few hours to put everything together and show an MVP (minimum viable product).
In my last post I highlighted the need to learn your development environment. Think of it as a full ecosystem in which each element (compiler/IDE/build scripts) lives alongside the others and contributes to producing the final deliverable.
Lots of learning material about programming languages is available today, but I see little to no material on the tools available to developers.
One reason could be that tools are a temporary commodity that might be replaced within a few years. That’s true: some software is released and, after some months or years, it is no longer updated, stops introducing new features, or is simply displaced by brand-new tools.