Nope, not musty books, but programming constructs.
When you learn to program in C, you’re exposed to data structures that map directly to the underlying hardware. Basic data types have a fixed size, and locating information within them means indexing a number of bytes into them. If you want a variable-size data structure, or to find information within a structure by something other than a byte offset, then you need to either build something complex or find a library to use.
Of course, keeping mappings between pieces of data, and dealing with unpredictable, variable-sized collections of data, are incredibly common needs.
When Java came along in the mid-90s, it dealt with this by modelling the language on the workings of the machine in much the same way, but providing a bunch of high-quality variable-size and associative data structures as part of the standard library. These have evolved over the years, and we now have some incredibly capable and flexible data structures available.
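The standard-library approach looks something like this: a growable list and an associative map from `java.util`, no manual memory management or byte-offset arithmetic required (a minimal sketch; the class and data are mine):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class CollectionsDemo {
    public static void main(String[] args) {
        // A variable-size collection: grows as needed.
        List<String> names = new ArrayList<>();
        names.add("Ada");
        names.add("Grace");

        // An associative structure: values located by key, not by offset.
        Map<String, Integer> ages = new HashMap<>();
        ages.put("Ada", 36);
        ages.put("Grace", 85);

        System.out.println(names.size());      // 2
        System.out.println(ages.get("Grace")); // 85
    }
}
```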
However, in 2012, the way they’re available feels increasingly archaic.
Other languages, which eschew the direct mapping to the underlying hardware in favour of developer productivity, make these structures available as ‘part of the language’. Sometimes these implementations aren’t as fully featured, but they make simple things simple and leave options open for the more complex cases.
While I have sympathy with the idea of designing a language which maps cleanly to the underlying system, I don’t think it’s a decision which makes sense any more. Good developers will learn and understand how these features work, and when it’s (in)appropriate to use them. And they won’t adopt, or stick with, your language when it makes simple things difficult.
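To make the friction concrete, here’s a sketch of the gap: populating a small map in Java takes a declaration plus one statement per entry, where languages with collection literals do it in a single expression (the class, variable names, and data below are mine):

```java
import java.util.HashMap;
import java.util.Map;

public class LiteralEnvy {
    public static void main(String[] args) {
        // Java: no literal syntax for maps -- construct, then put, put, put.
        Map<String, String> capitals = new HashMap<>();
        capitals.put("France", "Paris");
        capitals.put("Japan", "Tokyo");

        // In languages where collections are 'part of the language', the
        // equivalent is one expression, e.g. (Python-style, for comparison):
        //   capitals = {"France": "Paris", "Japan": "Tokyo"}

        System.out.println(capitals.get("Japan")); // Tokyo
    }
}
```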
Yes, Java 9, I’m pointing at you.