std::map and std::vector. This would not do!

"Why aren't you using the container classes that are already used in the game?" asked the lead.
"Well, this new system needs to detect mutable operations on containers and contained elements. The existing containers aren't const-correct. They also have quite a few bugs," replied the new initiate.
"That's unlikely, that code has been in use for almost 10 years now. We've shipped several titles with that code!" countered the lead. "You should be using the existing containers instead. It's impossible to parse STL error output. It's too slow for video games. We have already standardized on our own containers. Besides, we've also purchased this other technology for over $300,000.00 and they also use their own container types."
Okay, this isn't a fable or a myth or fiction. It happens. It happens more often with old codgers like yours truly who are trying to be very pragmatic about their priorities. It is often better to go with what you know than the latest, shiny new thing if it means you'll have some predictability in the schedule. Sometimes it is better to hold your tongue and pick your battles. In this case, the lead correctly noted that:
- The studio had several successes with the code already in use, and didn't want to risk introducing something unfamiliar to a team of (mostly) C programmers who just happened to be using C++
- The rest of the code extensively uses home-rolled container types, so interfacing with other, standardized container types could prove problematic
- The team had invested heavily in third-party technology to get a head start on development, and this third-party tech used its own container types
There are a number of problems with those arguments, however:
- By investing in third-party technology that provided another container library, a conflict had already been introduced. The home-rolled code could not interface easily with this third party technology.
- If the code was so stable, why, after ten years, were there still bugs in the containers? Because the programming team (few of whom had any responsibility for the container code) had learned over time to work around the problems by convention.
- Dealing with compiler error output for code that uses only standard C++ and the C++ standard library is hardly an excuse to avoid it. The output is usually very explicit about what the problem is, even if the compiler vendor's implementation of the code itself wouldn't win any beauty contests. Anyone who spends any time in standard C++ can grok compiler output, which usually also leads to better programming practices (like const-correct coding). It's no more difficult to parse than the first time a programmer encounters pointers and does something that upsets the compiler.
- The performance characteristics of the STL are strictly defined in the standard. "Slow" is a myth. If the wrong algorithm is used to solve a problem, it will be slow. Most programmers have had enough school and/or experience to understand what O(N^2) means compared to O(N). The standard spells this out. If a programmer is going to use C++, getting to know the language, including its strengths, weaknesses, and proper application, is a requirement.
- Using the standard C++ library would actually reduce the risk to the project. The STL started life in 1980 as part of Ada, was introduced to C++ in 1994 (long before standardization), and has been thoroughly vetted by a LOT of programmers involved in the standardization process. The vendor implementations have had good coverage since the 1990s and are reasonably bug-free at this point. A library with almost 30 years of design behind it, hundreds of thousands of programs using it, and a formal place in the ISO/ANSI standard for the C++ language is a pretty safe bet for use on a project compared to a home-rolled solution that has had coverage from three programs.
Were these reasons enough to bite the bullet and embrace the STL? Sometimes logic and politics don't mix. In this case, no. Sowing cynicism or fighting a team of C programmers to move into the 21st century (or at least the last 20 years) would have been counterproductive. Game development is a creative process that requires optimism, motivated programmers, and agility. In this case, the psychological risk of demoralizing the team outweighed the real technical benefits of re-using standardized code.
Oddly enough, the rationale for using the pre-existing code was "re-use". In reality, fear of unknown risks and NIH (Not Invented Here) played a pretty big role.
If the team consists of programmers that are generally open to new ways of working, have a generally optimistic attitude about learning new techniques, and the initial learning curve (and resulting productivity boost) outweigh risks to the schedule, it is probably a good idea to embrace the STL. This is easier early in projects than later, as some pre-existing code may need to be refactored. The time eventually saved in last-minute bug-fixing and "optimizations" that limit functionality for the next generation title is almost always worth it!
Some useful references on the STL:
Yup.

Well, you could always hide your use of the STL behind a DLL's interface. ;) Not that I'd know anything about that.

Amen, brother.

Nice intro story, BTW ;)

Read here why you are wrong about STL in games:
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2271.html