
Recursively counting files in a Linux directory

Robert Buckley Published in 2012-02-06 07:59:18Z

How can I recursively count files in a Linux directory?

I found this:

find DIR_NAME -type f ¦ wc -l

But when I run this it returns the following error.

find: paths must precede expression: ¦

Felix Reply to 2016-09-20 10:13:15Z

This should work:

find DIR_NAME -type f | wc -l


  • -type f to include only files.
  • | (and not ¦) pipes the find command's standard output into the wc command's standard input.
  • wc (short for word count) counts newlines, words and bytes on its input (docs).
  • -l to count just newlines.


  • Replace DIR_NAME with . to execute the command in the current folder.
  • You can also remove the -type f to include directories (and symlinks) in the count.
  • It's possible this command will overcount if filenames can contain newline characters.
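The newline caveat in the last point can be avoided by counting NUL terminators instead of lines, assuming a find that supports -print0 (GNU and BSD both do); a minimal sketch using a throwaway directory:

```shell
# Count files correctly even when a name contains a newline.
# -print0 ends each path with NUL; tr keeps only the NULs; wc -c counts them.
dir=$(mktemp -d)
touch "$dir/a" "$dir/b" "$dir/c$(printf '\nd')"     # third name contains a newline
find "$dir" -type f | wc -l                          # overcounts: 4
find "$dir" -type f -print0 | tr -cd '\0' | wc -c    # correct: 3
```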

Explanation of why your example does not work:

In the command you showed, the character between the two commands is not the pipe (|), which connects the output of one command to the input of the next, but the broken bar (¦). The shell does not recognize ¦ as an operator, so it is passed to find as an argument, and find reports the "paths must precede expression" error.

the Tin Man Reply to 2014-06-18 17:57:13Z

For the current directory:

find . -type f | wc -l
Sophy Reply to 2014-07-02 02:58:41Z

To determine how many files there are in the current directory, run ls -1 | wc -l. This uses wc to count the number of lines (-l) in the output of ls -1. It doesn't count dotfiles. Please note that ls -l (that's an "L" rather than a "1" as in the previous examples), which I used in previous versions of this HOWTO, will give you a file count one greater than the actual count, because ls -l prints a "total" header line before the listing. Thanks to Kam Nejad for this point.

If you want to count only files and NOT include symbolic links (just an example of what else you could do), use ls -l | grep -v '^l' | wc -l (that's an "L", not a "1", this time; we want a "long" listing here). grep checks for any line beginning with "l" (indicating a link) and discards that line (-v).

Relative speed: "ls -1 /usr/bin/ | wc -l" takes about 1.03 seconds on an unloaded 486SX25 (/usr/bin/ on this machine has 355 files). "ls -l /usr/bin/ | grep -v ^l | wc -l" takes about 1.19 seconds.

Source: http://www.tldp.org/HOWTO/Bash-Prompt-HOWTO/x700.html
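The off-by-one from ls -l comes from the "total" header line it prints before the listing; a quick demonstration against a throwaway directory:

```shell
dir=$(mktemp -d)
touch "$dir/one" "$dir/two"
ls -l "$dir" | wc -l    # 3: two entries plus the "total" header line
ls -1 "$dir" | wc -l    # 2: one line per entry
```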

Greg Bell Reply to 2017-01-21 00:53:20Z

If you want a breakdown of how many files are in each dir under your current dir:

for i in $(find . -maxdepth 1 -type d) ; do
    echo -n "$i: "
    (find "$i" -type f | wc -l)
done

That can go all on one line, of course. The parentheses make it clear whose output wc -l is counting (find "$i" -type f in this case).
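Because the unquoted $(find …) relies on word splitting, this loop breaks on directory names containing spaces; a sketch of a whitespace-safe variant that produces the same per-directory breakdown with -exec:

```shell
# Per-directory file counts, safe for directory names with spaces.
# Each directory is passed to an inline sh as $1, so no word splitting occurs.
find . -maxdepth 1 -type d -exec sh -c 'printf "%s: " "$1"; find "$1" -type f | wc -l' _ {} \;
```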

lev Reply to 2017-08-29 10:18:45Z

You can use

$ tree

after installing the tree package with

$ sudo apt-get install tree

(on a Debian / Mint / Ubuntu Linux machine).

The command shows not only the count of the files, but also the count of the directories, separately. The option -L can be used to specify the maximum display level (which, by default, is the maximum depth of the directory tree).

Hidden files can be included too by supplying the -a option.

Santrix Reply to 2015-03-12 07:20:43Z

If you want to know how many files and sub-directories exist from the present working directory you can use this one-liner

find . -maxdepth 1 -type d -print0 | xargs -0 -I {} sh -c 'echo -e $(find {} | wc -l) {}' | sort -n

This works as-is with GNU tools; on BSD-derived systems (e.g. macOS), omit the -e from the echo command.

BroSlow Reply to 2015-03-16 01:31:19Z

If you want to avoid miscounting, don't let wc -l see the filenames themselves, since it will count a name containing newlines as 2+ files.

For example, consider a single file whose name contains an EOL (newline) character:

> mkdir emptydir && cd emptydir
> touch $'file with EOL(\n) character in it'
> find -type f
./file with EOL(?) character in it
> find -type f | wc -l
2

Since at least GNU wc does not appear to have an option to read/count a NUL-terminated list (except from a file), the easiest solution is not to pass it filenames at all, but a fixed output each time a file is found, e.g. in the same directory as above:

> find -type f -exec printf '\n' \; | wc -l

Or, if your find supports -printf:

> find -type f -printf '\n' | wc -l
rickydj Reply to 2015-07-09 03:52:47Z

You could try :

find "$(pwd)" -type f -exec ls -l {} \; | wc -l

(The semicolon must be escaped as \; so the shell passes it to find rather than treating it as a command separator.)
Sebastian Meine Reply to 2015-08-29 14:53:33Z

Combining several of the answers here together, the most useful solution seems to be:

find . -maxdepth 1 -type d -print0 | xargs -0 -I {} sh -c 'echo -e $(find "{}" -printf "\n" | wc -l) "{}"' | sort -n

It can handle odd things like file names that include spaces, parentheses, and even newlines. It also sorts the output by the number of files.

You can increase the number after -maxdepth to have subdirectories counted too. Keep in mind that this can potentially take a long time, particularly if you have a deeply nested directory structure in combination with a high -maxdepth number.

DanielK Reply to 2015-10-12 16:08:19Z

I would like to give a different approach, filtering by name pattern. This example counts all available grub kernel modules:

ls -l /boot/grub/*.mod | wc -l
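The glob above only covers one directory level; the same extension filter can be applied recursively with find -name. A sketch against a throwaway directory (the names are illustrative):

```shell
dir=$(mktemp -d)
mkdir "$dir/sub"
touch "$dir/a.mod" "$dir/sub/b.mod" "$dir/readme.txt"
find "$dir" -type f -name '*.mod' | wc -l   # 2: matches at every depth
```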

the Tin Man Reply to 2017-01-04 21:12:55Z

On my computer, rsync is a little faster than the find | wc -l from the accepted answer. For example, you can count the files in /Users/joe/ like this:

[joe:~] $ rsync --stats --dry-run -ax /Users/joe/ /xxx

Number of files: 173076
Number of files transferred: 150481
Total file size: 8414946241 bytes
Total transferred file size: 8414932602 bytes

The second line has the number of files, 150,481 in the above example. As a bonus you get the total size as well (in bytes).


  • The first line counts files, directories, symlinks, etc. all together, which is why it is larger than the second line.
  • The --dry-run (or -n for short) option is important so that the files are not actually transferred!
  • The /xxx parameter can be any empty or non-existent folder. Don't use / here.
  • I used the -x option ("don't cross filesystem boundaries"), which means that if you run it on / with external hard disks attached, only the files on the root partition are counted.
Ram Reply to 2016-06-01 17:16:00Z

ls -l | grep -e -x -e -dr | wc -l

1. Long listing.
2. Filter files and directories.
3. Count the filtered lines.

Karl Richter Reply to 2017-02-20 07:45:01Z

There are many correct answers here. Here's another!

find . -type f | sort | uniq -w 10 -c

where . is the folder to look in, and uniq -w 10 compares only the first 10 characters of each line, so the -c counts group file paths by their leading 10 characters (roughly, by directory).
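As an illustration of what uniq -w does here (note that -w is a GNU extension), with two top-level directories the per-prefix counts come out like this:

```shell
dir=$(mktemp -d)
mkdir "$dir/aaaa" "$dir/bbbb"
touch "$dir/aaaa/1" "$dir/aaaa/2" "$dir/bbbb/1"
# Compare only the first 7 chars ("./aaaa/", "./bbbb/"), grouping files by directory.
(cd "$dir" && find . -type f | sort | uniq -w 7 -c)
```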

the8472 Reply to 2017-03-05 23:14:56Z

I have written ffcnt to speed up recursive file counting under specific circumstances: rotational disks and filesystems that support extent mapping.

It can be an order of magnitude faster than ls or find based approaches, but YMMV.

nickl- Reply to 2017-08-19 23:30:32Z

With bash:

Create an array of entries with ( ) and get the count with ${#array[@]}.

FILES=(./*); echo ${#FILES[@]}

OK, that doesn't count files recursively, but I wanted to show the simple option first. A common use case might be creating rollover backups of a file. This will create logfile.1, logfile.2, logfile.3, and so on.

CNT=(./logfile*); mv logfile logfile.${#CNT[@]}

To get the count of files recursively we can still use find in the same way:

FILES=($(find . -type f)); echo ${#FILES[@]}

Note that word splitting makes this miscount names that contain whitespace.
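A word-splitting-safe recursive variant, assuming bash 4+ for globstar, counts correctly even when names contain spaces or newlines:

```shell
# bash 4+: count regular files recursively without parsing find output.
shopt -s globstar nullglob dotglob
count=0
for f in ./**/*; do
  [ -f "$f" ] && count=$((count + 1))
done
echo "$count"
```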
user128364 Reply to 2017-11-02 10:55:44Z

find -type f | wc -l

OR (If directory is current directory)

find . -type f | wc -l
