How can I increase the number of files I can open at a time?


People who ask this question invariably under-specify the question. They just say, "How can I increase the number of files I can open at a time?" without saying how they're opening them. From the operating system's point of view, the number of files you can open at a time is limited only by available resources. Call CreateFile until you drop. (This remark applies to local files. When you talk over the network, things get weirder.)

The fact that these people are asking the question, however, indicates that they're not using CreateFile to open the files. They're using some other intermediate layer, and it's that layer that is imposing some sort of limit. You'll have to investigate that layer to see what you can do about raising the limit.
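
If you want to see the raw limit for yourself, a quick test is to call CreateFile in a loop until it fails. A minimal sketch (the target file and the output format are just placeholders):

    #include <windows.h>
    #include <stdio.h>

    int main()
    {
        // Keep opening the same local file until the system runs out of resources.
        // The handles are deliberately leaked; process exit cleans them up.
        int count = 0;
        for (;;) {
            HANDLE h = CreateFileW(L"C:\\Windows\\win.ini", GENERIC_READ,
                                   FILE_SHARE_READ, NULL, OPEN_EXISTING, 0, NULL);
            if (h == INVALID_HANDLE_VALUE) break;
            count++;
        }
        printf("CreateFile failed after %d handles (error %lu)\n",
               count, GetLastError());
        return 0;
    }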

[Raymond is currently away; this message was pre-recorded.]

Comments (33)
  1. gkdada says:

    Hey, I remember having to add "Files=20" (or some such number) to config.sys to increase the number of files the apps could open. Maybe these guys asking that question are stuck in some kind of time warp!

  2. Nathan_works says:

    Maybe they are using portable C/C++ functions that allow their code to run on other machines. Maybe it works right on those, but fails on winders, so you get lots of "MS suckz!??!oneone" complaints.

  3. Neil (SM) says:

    One would think that would be stretching the definition of "portable functions."

  4. Gabe says:

    Nathan: Are "winders" some kind of windlass or winch?

    A search for "increase open files" shows people mostly wondering about Linux and DOS, although it also shows that msvcrt has a limit of 512 files by default (which can be increased to 2048 with _setmaxstdio). I suspect that Windows users running into a limit of open files are hitting either the 512 or 2048 limit.
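
    (For reference, raising the CRT limit looks roughly like the sketch below; it assumes the Microsoft CRT, where _setmaxstdio and _getmaxstdio are declared in <stdio.h>, and 2048 was the documented maximum at the time.)

        #include <stdio.h>

        int main()
        {
            // The default msvcrt limit is 512 simultaneously open FILE* streams;
            // ask for more, up to the CRT's maximum.
            if (_setmaxstdio(2048) == -1) {
                fprintf(stderr, "could not raise the stdio limit\n");
                return 1;
            }
            printf("stdio limit is now %d\n", _getmaxstdio());
            return 0;
        }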

  5. Matt Craighead says:

    I’ve had a lot of frustrations with this on Linux.  I recall running into some problems where I was only allowed to have 4096 files open, and increasing that limit would have required the sysadmins to roll out an update to thousands of PCs.  Never on Windows — after hitting the problem on Linux, I wrote a quick Windows test app to call CreateFile in a loop, and I think I got to something like 300K files before it failed.

    The other one on Linux was, I think, 255 NFS automounts maximum.  After you hit that limit, file operations would start failing if you happened to try to access an NFS resource that wasn’t mounted yet.  Easy to run into that limit of 255 in a big multi-user environment with dozens of people logged into a single computer.

    These kinds of hard-coded limitations may have made sense in the 70s and 80s when computers were small, or in embedded systems, but I just don’t get why some of them still persist to this day.

    Windows is not blameless: how about MAX_PATH?  You can open very long paths with that \\?\ prefix, but then you have to implement relative paths yourself.  Kind of a cop-out.
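
    (Roughly what that looks like; the path below is made up, and note that a \\?\ path must be fully qualified, with no "." or ".." components.)

        #include <windows.h>

        int main()
        {
            // The \\?\ prefix (escaped in the literal below) turns off MAX_PATH
            // parsing for the wide-character APIs.
            HANDLE h = CreateFileW(
                L"\\\\?\\C:\\some\\very\\deeply\\nested\\directory\\tree\\file.txt",
                GENERIC_READ, FILE_SHARE_READ, NULL, OPEN_EXISTING, 0, NULL);
            if (h != INVALID_HANDLE_VALUE)
                CloseHandle(h);
            return 0;
        }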

  6. John says:

    I think 95% of people are never going to run into problems with MAX_PATH; it’s a lot more characters than it sounds like:

    C:\Documents and Settings\John Jacob Jingleheimer Schmidt\His name is my name too\Whenever we go out\The people always shout\There goes John Jacob Jingleheimer Schmidt\da da da da da da da\Raymond Chen\The Old New Thing\Igor Levicki and Norman Diamond 4ever

    I bet that this is really only a problem with other languages (i.e. German) where equivalent words and phrases are typically twice as long if not more.

    It could also be a problem if you are an organizational Nazi and have folder hierarchies that are 20 levels deep.

  7. Nathan_works says:

    Winders are what you roll down in your car when the weather is nice.

    I forget which way it went (probably a Linux problem and not on winders), but I had errors when I re-used an fstream. Digging showed the answer to be "the C++ spec isn’t clear," so implementers did different things. (IIRC, the flags were not cleared when the stream was closed, so re-opening a file with the same stream variable failed. A sketch of the workaround is at the end of this comment.)

    So I was angling more for a "different interpretations of the standard" problem in my ‘portable code’ response.
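
    (If it was the pre-C++11 behaviour I’m thinking of, the fix was an explicit clear() between uses; a rough sketch, with made-up file names:)

        #include <fstream>
        #include <string>

        void read_both(const char* first, const char* second)
        {
            std::ifstream in(first);
            std::string line;
            while (std::getline(in, line)) { /* use the first file */ }
            in.close();
            in.clear();   // some pre-C++11 libraries kept eofbit/failbit set after close()
            in.open(second);
            while (std::getline(in, line)) { /* use the second file */ }
        }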

  8. Aaron G says:

    Why exactly would one need to have thousands of files open at once?

  9. Matt Craighead says:

    "I think 95% of people are never going to run into problems with MAX_PATH"

    As a software developer, I’ve hit it (and a related problem: applications with maximum command line lengths or other statically sized internal strings).  Deeply nested directory structures and all.  People reporting that their build works if they sync their Perforce tree at "c:\p4", but not if they use "c:\develop".  That kind of thing.

    When you have to start using the "subst" command to map a subportion of your tree to a drive letter, you know you’re in big trouble.

    255 characters per path level is OK.  255 total is rather restrictive.

    "Why exactly would one need to have thousands of files open at once?"

    I think one of the problematic cases was repeated use of open("/dev/zero"), mmap(), close().  The mapping keeps the file open even after close(), until the last page is munmap()’d.  (The pattern is sketched at the end of this comment.)

    More generally: I can think of a number of cases where you’d want to have thousands of open files.  A server, for one.
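
    (For reference, the open/mmap/close pattern mentioned above looked roughly like this; the length is a placeholder.)

        #include <fcntl.h>
        #include <sys/mman.h>
        #include <unistd.h>
        #include <cstddef>

        // open/mmap/close: the descriptor is closed right away, but the mapping
        // itself stays live (along with whatever it holds) until munmap().
        void* map_some_zeroes(std::size_t len)
        {
            int fd = open("/dev/zero", O_RDWR);
            if (fd < 0)
                return 0;
            void* p = mmap(0, len, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);
            close(fd);
            return (p == MAP_FAILED) ? 0 : p;
        }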

  10. frymaster says:

    Depends what you’re doing, and also if it’s a per-process, -user or -system limit.

    I think the Linux limit is part of the user restrictions system (i.e. "restrict users to blah files"), but I don’t remember if it’s a per-process or per-user limit.

    My university has a per-user limit of about 5 processes on the school of computing’s shared Linux webserver.  One of them is taken up by the SSH connection, and one by your shell.  If you try to access a man page, you run into problems, because the output is run through quite a complicated pipe in order to format the pages correctly for your screen.

  11. Gabe says:

    Nathan: I got it. Winders are what you use to roll down the windows.

  12. Whiskey Jones says:

    You need more than 4096 files open? Sounds like you’re doing something stupid.

  13. Dean Harding says:

    Winders is the redneck edition of Windows. (pronounced with a short "i", not long "i")

    I remember hitting the open files limit on Linux with an email server. Thousands of open files is not uncommon for an email server.

  14. Joe Butler says:

    From the CreateFile docs:

    "In the ANSI version of this function, the name is limited to MAX_PATH characters. To extend this limit to 32,767 wide characters, call the Unicode version of the function and prepend "\?" to the path. For more information, see Naming a File.

    Windows Me/98/95:  This string must not exceed MAX_PATH characters."

    Having pointed that out, sometimes I get errors  similar to ‘path too long to delete’ in Windows Explorer when trying to delete folders – it might even be when trying to empty the Recycle Bin!  This happens where I might have copied an entire drive to a folder on another drive – and the combination of new folder name + original path probably goes over MAX_PATH.  The way around it is to delete the deeper-level folders first.

    I think for most users, though, with their ‘My Documents’, it would be difficult to reach the maximum path length.

  15. asshat says:

    4096 files ought to be enough for anoybody!

  16. jcs says:

    Regarding MAX_PATH: This is very easy to exceed with development tools (such as J2EE application servers) that automatically create deeply nested directory trees to hold versions of code artifacts and configuration settings.

  17. asshat says:

    4096 files ought to be enough for anybody!

  18. Joe Butler says:

    I’m curious about how many people would use something like fopen() or similar rather than CreateFile() to open a file.  We had to use POSIX-compatible functions in a system I worked on "because we MIGHT want to recompile it for Linux in the future" (the project was scrapped, I think).  But we lost the control that comes along with CreateFile(), such as file locking.  And I guess you lose things such as sparse files, etc. too.

    How common is it that people here, say, need to avoid the use of Win32 API calls in favour of POSIX-only ones?

  19. Gabe says:

    Joe, using the stdio facilities doesn’t just give you portability, it also gives you things like buffering and newline translation. If you’re already familiar with the C stdlib, why bother learning new APIs?

  20. Steve says:

    @gkdada – DUMMY! You invoked Yuhong Bao by referencing any pre-NT Microsoft O/S. If you have been here more than 5 minutes you should know better!

  21. Puckdropper says:

    In Vista’s Explorer, if you have a certain number of files selected and press Enter, nothing happens.  In XP, you got a warning (which was nice).  So, this might be part of the user’s question.

  22. Worf says:

    The funny thing is, if Linux’s 4096-file limit were really that restrictive, people running servers would’ve run into it. After all, you have Linux NFS (true – you can have it in kernel, but a userspace server exists), SMB, FTP, and web servers, and they all seem just fine with the limit.

    I suspect, though, that because clone() and friends are really, really, really cheap on Unix, calling APIs like fork() starts a fresh per-process limit, so in practice no one reaches it. On Windows, processes are expensive (CreateProcess() is quite a heavy function), so the limits have to be huge, since running threaded is the better idea on Windows.

    The funny thing is, there are many servers on Linux that are also threaded…

    Now, the biggest limit I’ve hit on Linux is the command line limit. We were doing a build, and we found that if your username had 3 characters, you were fine. If your username was 4 characters, the build failed. We traced this down to a kernel build command that issued a find and passed the results to xargs. If you had a 3-letter username, it produced a command line that was about 110kB in length (yes, kilobytes). If you had a 4-letter username, the command line expanded to around 145kB. Linux has a command line length limit of 128 KiB.

    Also, for those of you playing with mmap(): the finest granularity for mmap() is the CPU’s page size, since mmap() relies on the MMU to produce a page fault, which the VM code handles by mapping in the file contents appropriately. On Linux, mmap() will only return a pointer that’s page-aligned, too. You can map smaller regions, but mapping an oddball region within a page makes this obvious, since you get back a pointer to the beginning of a page and have to apply the offset yourself.

    The practical limit for mmap() would be mapping the entire process’s address space, one page-size block at a time, minus the code/data/library areas. But you’ll probably hit an internal kernel limit first, since every mapping grows the kernel’s internal data structures.

  23. Pax says:

    @Worf set forth "We traced this down to a kernel build command that issued a find and passed the results to xargs. If you had a 3-letter username, it produced a command line that was about 110kB in length (yes, kilobytes). If you had a 4-letter username, the command line expanded to around 145kB. Linux has a command line length limit of 128 KiB."

    xargs has parameters which can limit how many items are placed on each iteration of the command line.

  24. JamesW says:

    @John

    "I bet that this is really only a problem with other languages (i.e. German)"

    "It could also be a problem if you are an organizational Nazi"

    I guess you’re really screwed if you’re a German organizational Nazi.

  25. Yuhong Bao says:

    "@gkdada – DUMMY! You invoked Yuhong Bao by referencing any pre-NT Microsoft O/S. If you have been here more than 5 minutes you should know better!"

    I would have said the thing about DOS FILES= even without gkdada’s comment.

  26. Daniel Colascione says:

    Since Linux 2.6.23, the command line length is no longer limited.

    See the comprehensive list here:

    http://www.in-ulm.de/~mascheck/various/argmax/#results

  27. quotemstr says:

    Matt, I don’t quite know what you’re talking about with regard to open/mmap/etc.

    Here is a testcase: http://pastebin.ca/1224219

    On my Linux box, I was able to reach 65524 mappings. On OpenBSD, that number increased to 262102. Also, the Linux number is *per process*.

    Also, if you’re mmap()ing /dev/zero, I bet you’re trying to implement your own memory allocator. If you want to do that, mmap() larger regions and divide them up; or just use malloc: it already uses mmap() when appropriate.
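
    (A rough sketch of the "map a big region and carve it up" approach; the region size, alignment, and names here are invented for illustration.)

        #include <sys/mman.h>
        #include <cstddef>

        // One large anonymous mapping up front, handed out in pieces, instead of
        // one mmap() call (and one kernel-side mapping) per allocation.
        static char*       arena;
        static std::size_t arena_used;
        static const std::size_t arena_size = 1 << 20;   // 1 MiB, arbitrary

        void* arena_alloc(std::size_t n)
        {
            if (!arena) {
                void* p = mmap(0, arena_size, PROT_READ | PROT_WRITE,
                               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
                if (p == MAP_FAILED)
                    return 0;
                arena = static_cast<char*>(p);
            }
            n = (n + 15) & ~static_cast<std::size_t>(15);   // keep 16-byte alignment
            if (arena_used + n > arena_size)
                return 0;                                   // no growth in this sketch
            void* result = arena + arena_used;
            arena_used += n;
            return result;
        }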

  28. Matt Craighead says:

    "Matt, I don’t quite know what you’re talking about with regard to open/mmap/etc.  Here is a testcase: http://pastebin.ca/1224219"

    Honestly, at this point… it’s been a while.  I don’t know what kernel the systems were running, or the exact arguments that were passed to open() and mmap().

    To make things even nuttier, the problem actually happened not running the native Linux version of the app, but running the Windows version of the app on Linux under wine (!).  The native Linux app avoided the problematic scenario in the first place.  wine’s emulation of VirtualAlloc was, shall we say, far from optimal.

    Now that I’m thinking about it more, I think it’s coming back to me.  I think rather than using /dev/zero, wine would actually create a temporary file, mmap() it, and then unlink it.  (Or am I thinking of wine’s emulation of unnamed section objects?  Perhaps I am not remembering it straight after all.)  It was all overly elaborate for what should have been a simple operation to emulate.

    And no, it actually wasn’t for a memory allocator, it was for something else entirely… don’t ask. :)

  29. BA says:

    "I think 95% of people are never going to run into problems with MAX_PATH; it’s a lot more characters than it sounds like:

    C:\Documents and Settings\John Jacob Jingleheimer Schmidt\His name is my name too\Whenever we go out\The people always shout\There goes John Jacob Jingleheimer Schmidt\da da da da da da da\Raymond Chen\The Old New Thing\Igor Levicki and Norman Diamond 4ever"

    At my last job, we routinely ran into MAX_PATH. The folder hierarchy was a disaster. It looked something like this (using OldCo as the company name):

    D:\Data\Operation Manual\OldCo Technology\OldCo Technology Customer Quotes\2007 Quotes\Quotes for John Smith\OldCo Technology Quote for John Smith on 10.12.2007.xlsx

    Lots of repetition and lots of shortcuts to these retardedly long file names.

  30. >mem says:

    Why limit a 32-bit OS to command lines shorter than 2^32 characters? It could be technically possible to support more.

  31. Yuhong Bao says:

    "Nathan: Are "winders" some kind of windlass or winch?"

    "Nathan: I got it. Winders are what you use to roll down the windows."

    It is of course really a misspelling, but I laughed when I read this!

  32. SuperKoko says:

    Linux state (2.6.23):

    Through ulimit, a process can have up to 1048576 files open at once.

    Beware: select(), due to a fundamental design flaw, has a much lower limit. FD_SETSIZE, the highest fd supported by select(), is 1024.

    Actually, the kernel has no limit… Changing the limit in header files just works, but doing so breaks binary compatibility for programs exchanging fd_set structures.

    Solution: Use poll().
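
    (A minimal sketch of the poll() shape: the caller supplies the array of descriptors, so there is no FD_SETSIZE ceiling. The descriptor and timeout are placeholders.)

        #include <poll.h>

        // Wait for one descriptor to become readable.  Unlike select(), poll()
        // takes an array of whatever size the caller likes.
        int wait_readable(int fd, int timeout_ms)
        {
            struct pollfd pfd;
            pfd.fd = fd;
            pfd.events = POLLIN;
            pfd.revents = 0;
            return poll(&pfd, 1, timeout_ms);   // >0 ready, 0 timed out, -1 error
        }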

  33. cyanna says:

    Actually the question has been put forward on several forums in the following scenario: in Windows Vista, the user highlights several files (with the same extension) in Explorer, right-clicks, and chooses Open.

    Result: if the number of files is 16 or fewer, the files open. If the number of files is more than 16, nothing happens.

    Tested with .txt files when the associated program is Notepad, and with pictures when the associated program is the picture viewer.

    Incidentally, if any of the files has a different extension, even if the associated program is the same, the right-click menu doesn’t even offer the "Open" option (example: xls and xlsx files with Office 2007 as the associated program). This is not an Office issue, as the files will open correctly if selected from within the relevant Office application via File > Open > Browse. This behaviour is probably not related to the 16-file limit, but it is much more annoying!

