Night_Thastus 11 hours ago

Ripgrep has saved me so, so much time over the years. It's become an invaluable tool, something I install the moment I start up a new system. It's essential for navigating older codebases.

My only complaint is that there are a couple of characters that the -F (treat as literal) option still seems to treat as special, needing some kind of escape - though I don't remember which ones now.

Always glad to see it keep updating!

  • burntsushi 11 hours ago

    > My only complaint is that there are a couple of characters that the -F (treat as literal) option still seems to treat as special, needing some kind of escape - though I don't remember which ones now.

    If you have an example, I can try to explain that specific case. But `-F/--fixed-strings` will 100% turn off any regex features, and the pattern will instead be treated as a simple literal. Where you might still need escaping is if your shell requires it.

    • cormacrelf 5 hours ago

      How about -F -regexthatlookslikeaflag? Verbatim, that errors out as the command line parsing tries to interpret it as a flag. If you don’t have -F, then you can escape the leading hyphen with a backslash in a single quoted string: '\-regex…', but then you don’t get fixed string search. And -F '\-regex…' is a fixed string search for “backslash hyphen r e g e x”. The only way is to manually escape the regex and not use -F.

      I think maybe a syntax like -F=-regex would work.

      • burntsushi 5 hours ago

        Yeah, that's a good call out. You would need `rg -F -e -pattern`.
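
        Spelled out, a quick sketch of both spellings (the second uses the conventional `--` "end of flags" marker, which should work here as well):

            $ rg -F -e '-regexthatlookslikeaflag'
            $ rg -F -- '-regexthatlookslikeaflag'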

    • kator 6 hours ago

      ripgrep has saved me so much time, I also use it now with LLMs and remind them they have ripgrep available! I added a donation on github, thanks for all your work.

    • echelon 10 hours ago

      Totally off-topic: what are the selling points of `jiff` vs chrono, time, std::time, etc.?

      Totally love your work! We've been sponsoring for awhile even though it isn't much. Thank you for all you do!

atonse 11 hours ago

rg is a tool that feels like magic, when in reality, like most things that feel like magic, it's the result of exceptionally good engineering, dedication to improvement, and actually taking advantage of the incredible hardware we all use daily.

It’s also smithing that’s unleashed the ability of agents to explore and reason about code faster than waiting for some sort of “lsp-like” standard we probably would’ve had to build instead over time.

  • restlake 7 hours ago

    genuinely curious what smithing means in this context!

    • speerer 7 hours ago

      "something" I expect.

    • mistrial9 4 hours ago

      A guess - smithing is a thing made by a smith's actions, so the work product of a skilled craftsman.

    • ninkendo 7 hours ago

      I think it’s just a typo for “something”, lol

    • sestep 7 hours ago

      I assume it's a typo from slide-to-type on a phone keyboard.

2bitencryption 14 hours ago

The ripgrep codebase is the ultimate “pour a drink, settle into your coziest chair, and read some high quality software” codebase. Just click around through it and marvel.

Fethbita 14 hours ago

Just like fd, I actually enjoy using rg, and I like this new set of command line tools.

  • dizhn 13 hours ago

    What I like about both is that their defaults, without any parameters, are what I need 99% of the time. Huge time saver.

    rg <string>

    fd <string>

zem 12 hours ago

The one thing I'd love to see added is an "extension" flag, equivalent to -g but which treats the provided arg as an extension (so `rg -e c,h` instead of `rg -g '*.{c,h}'`). 99% of the time I use glob patterns, it's to match on extension.

  • burntsushi 12 hours ago

    Have you seen the `-t/--type` flag? Your example could be written `-tc`. And for common ones that aren't in ripgrep already, you can define your own types.
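
    For example, a one-off custom type looks something like this (the type name and globs here are just an illustration), and the same flag can live in a ripgrep config file so it's always defined:

        $ rg --type-add 'web:*.{html,css,js}' -tweb somepattern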

    • zem 12 hours ago

      I have, and it's a very neat feature :) It just feels like extra ceremony to define my own type the first time I need a custom glob, though it would probably pay off in the long run.

jstrong 12 hours ago

ripgrep is one of the main reasons I got interested in Rust. It worked so well, it piqued my interest that it was written in Rust. Many years later, I'm very glad about that. Been using `rg` daily since then as well!

  • blux 7 hours ago

    [flagged]

    • CJefferson 5 hours ago

      I haven’t used qbasic in years, but hacking on nibbles is why I learnt qbasic, and I’d say that experience is a decent early chunk of why I’m an AI professor now. Nothing wrong with playing with languages that feature software you love!

davoneus 14 hours ago

Great tool, and incredibly easy to use. Started with it on Linux, and now use it on 'doze too.

It's probably the singular reason why I finally use regex as the first search option, rather than turning to it after brute-forcing through a search with standard wildcards.

  • linhns 13 hours ago

    It’s better than the normal grep, and there’s also the handy rg —-files.

locusofself 13 hours ago

I use ripgrep every single day of work. Whether it's in the command line or searching in vscode. Thanks burntsushi!

oever 11 hours ago

This week I wrote a small bash function that runs ripgrep only on the files that are tracked by git:

    rgg() {
        # read the NUL-separated list of git-tracked files into an array
        readarray -d '' -t FILES < <(git ls-files -z)
        rg "${@}" "${FILES[@]}"
    }

It speeds things up a lot on directories with many binary files and committed dot files. To search the dot files, -uu is needed, but that also tells ripgrep to search the binary files.

On repositories with hundreds of files, the git ls-files overhead is a bit large.

  • burntsushi 11 hours ago

    Can you provide a concrete example where that's faster? ripgrep should generally already be approximating `git ls-files` by respecting gitignore.

    Also, `-uu` tells ripgrep to not respect gitignore and to search hidden files. But ripgrep will still skip binary files. You need `-uuu` to also search binary files.
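
    In other words, roughly (a quick illustration of the same flags, nothing new):

        $ rg -u   PATTERN   # don't respect ignore files
        $ rg -uu  PATTERN   # ... and also search hidden files
        $ rg -uuu PATTERN   # ... and also search binary files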

    I tried playing with your `rgg` function. The first problem occurred when I tried it on a checkout of the Linux kernel:

        $ rgg APM_RESUME
        bash: /home/andrew/rust/ripgrep/target/release/rg: Argument list too long
    
    OK, so let's just use `xargs`:

        $ git ls-files -z | time xargs -0 rg APM_RESUME
        arch/x86/kernel/apm_32.c
        473:    { APM_RESUME_DISABLED,  "Resume timer disabled" },
        include/uapi/linux/apm_bios.h
        89:#define APM_RESUME_DISABLED  0x0d
    
        real    0.638
        user    0.741
        sys     1.441
        maxmem  29 MB
        faults  0
    
    And compared to just `rg APM_RESUME`:

        $ time rg APM_RESUME
        arch/x86/kernel/apm_32.c
        473:    { APM_RESUME_DISABLED,  "Resume timer disabled" },
    
        include/uapi/linux/apm_bios.h
        89:#define APM_RESUME_DISABLED  0x0d
    
        real    0.097
        user    0.399
        sys     0.588
        maxmem  29 MB
        faults  0
    
    So do you have an example where `git ls-files -z | xargs -0 rg ...` is faster than just `rg ...`?

    • oever 11 hours ago

      A checkout of my repository [0] with many pdf and audio files (20GB) is slow with -u. These data files are normally ignored because 1) they are in .gitignore and 2) they are binary.

      The repository contains CI files in .woodpecker. These are scripts that I'd normally expect to be searching in. Until a week ago I used -uu to do so, but that made rg take over 4 seconds for a search. Using -. brings the search time down to 24ms.

          git ls-files -z | time xargs -0 rg -w e23
          40ms
      
          rg -w. e23
          24ms
      
          rgg -w e23
          16ms
      
          rg -wuu e23
          2754ms
      
      To reproduce this with the given repository, fill it with 20GB of binary files.

      The -. flag makes this point moot though.

      [0] https://codeberg.org/vandenoever/rehorse

      • burntsushi 11 hours ago

        Oh I see now. I now understand that you thought you couldn't convince ripgrep to search hidden files without also searching files typically ignored by gitignore. Thus, `git ls-files`.

        Yes, now it makes sense. And yes, `-./--hidden` makes it moot. Thanks for following up!

    • EnPissant 3 hours ago

      I don't think this is the same thing as using gitignore.

      It will only search tracked files. For that it can just use the index. I would expect the index to be faster than looking at the fs for listings.

      • burntsushi 3 hours ago

        I was extremely careful with my wording. Re-quoting, with added emphasis:

        > ripgrep should generally already be approximating `git ls-files` by respecting gitignore.

        See also: https://news.ycombinator.com/item?id=45629515

        • EnPissant an hour ago

          I'm just trying to be helpful, not call you out.

          I've implemented gitignore aware file scanning before, and it was slower than git native operations when you only care about tracked files.

          It's the speed I was speaking to, not the semantics.

  • oever 11 hours ago

    After writing this comment, I read the man page again and found the -. flag which can be used instead of -uu.

    Searching in hidden files tracked by git would be great but the overhead of querying git to list all tracked files is probably significant even in Rust.

  • woodruffw 11 hours ago

    Maybe I’m missing something, but doesn’t ripgrep ignore untracked files in git by default already?

    • oever 11 hours ago

      The point is to search hidden files that are tracked by git. An example is CI scripts which are stored in places like .woodpecker, .forgejo, .gitlab-ci-yml.

      • burntsushi 11 hours ago

        One thing you might consider to make this more streamlined for you is this:

            $ printf '!.woodpecker\n!.forgejo\n!.gitlab-ci-yml\n' > .rgignore
        
        Or whatever you need to whitelist specific hidden directories/files.

        For example, ripgrep has `!/.github/` in its `.ignore` file at the root of the repository[1].

        By adding the `!`, these files get whitelisted even though they are hidden. Then `rg` with no extra arguments will search them automatically while still ignoring other hidden files/directories.

        [1]: https://github.com/BurntSushi/ripgrep/blob/38d630261aded3a8e...

        • oever 9 hours ago

          That's a great suggestion for .rgignore and ~/.rgignore.

  • kibwen 10 hours ago

    Is this faster than `git grep`?

    • oever 9 hours ago

      No, amazingly (to me) on the repo in question, `git grep` is twice as fast as `ripgrep -w.` or the custom `rgg` function.

      All are less than 100ms, so fast enough.

jiehong 7 hours ago

Recently, I found the `--replace` option, and it's quite nice. Along with `--type`, it made me feel like I'd been missing out on some nice features.

I’m happy reading releases notes more thoroughly to keep myself aware of new features.

Nice to see some better integration with jj!

vessenes 14 hours ago

rg is a first for me in that it's a CLI tool that an LLM taught me about -- it's a go-to tool for Claude and codex, and since I got most of my bash skills pre-dotcom-one-boom I'm historically just a grep user.

Anyway I'm trying to retrain the fingers these days, rg is super cool.

  • sshine 13 hours ago

    I switched to `ack` in 2017 because it handles recursive searches better.

    I didn't bother switching to `ag` when it came around because of having to retrain.

    But eventually I did switch to `rg` because it just has so many conveniences.

    I even switched to `fd` recently instead of `find` because it's easier and less typing for common use-cases.

    I've been using the terminal since 1997, so I'm happy I can still learn new things and use improved commands.

    • ahartmetz 12 hours ago

      In my case, I am still using ag because rg doesn't seem enough better to be worth switching. What's the big deal with rg vs ag?

      I had a similar thing with bash vs zsh before I learned about oh-my-zsh. Nushell also seems attractive these days... the good stuff from PowerShell in a POSIX-like shell.

      • burntsushi 12 hours ago

        ripgrep is a lot faster (which you might only notice on larger haystacks), has many fewer bugs and is maintained.

        • ahartmetz 10 hours ago

          ag is plenty fast (gigabytes in a fraction of a second) for me - I'd switch in a heartbeat if that wasn't so. Any bugs, hm, I guess I just haven't run into them. Thanks for the reply though! I realize who replied here ;)

          • burntsushi 10 hours ago

            Look at ag's issue tracker. There are some very critical bugs. You might be impacted by them and not even know it.

            As for perf, it's not hard to witness a 10x improvement that you'll actually feel. On my checkout of the Linux kernel:

                $ (time rg -wi '\w+(PM_RESUME|LINK_REQ)') | wc -l
            
                real    0.114
                user    0.547
                sys     0.543
                maxmem  29 MB
                faults  0
                444
            
                $ (time ag -wi '\w+(PM_RESUME|LINK_REQ)') | wc -l
            
                real    0.949
                user    6.618
                sys     0.805
                maxmem  65 MB
                faults  0
                444
            
            Or even basic queries can have a pretty big difference. In my checkout of the Chromium repository:

                $ (time rg Openbox) | wc -l
            
                real    0.296
                user    1.349
                sys     1.950
                maxmem  71 MB
                faults  0
                11
            
                $ (time ag Openbox) | wc -l
            
                real    1.528
                user    1.849
                sys     8.285
                maxmem  29 MB
                faults  0
                11
            
            Or even more basic. You might search a file that is "too big" for ag:

                $ time ag '^\w{42}$' full.txt
                ERR: Skipping full.txt: pcre_exec() can't handle files larger than 2147483647 bytes.

            • EliMdoza 5 hours ago

              I've been using both for many years now and have never run into issues, or even been able to tell any difference in speed, let alone 10x.

              What I notice, unfortunately, is that I often miss search results with rg because I forget I need to pass the additional -i flag. This has shaped my perception of rg - extra focus on performance, sub-optimal UX.

              • burntsushi 4 hours ago

                Whether smart case is enabled by default (as ag does) could easily go either way. Notably, I think having it disabled by default is a better UX. But ripgrep does have a --smart-case flag, which you can add to an alias or a ripgrep config file. It also works more consistently than ag's smart case feature, which has bugs.
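
                For example (the path here is just an example; ripgrep reads whatever file RIPGREP_CONFIG_PATH points at, one flag per line):

                    $ export RIPGREP_CONFIG_PATH="$HOME/.config/ripgrep/rc"
                    $ cat "$HOME/.config/ripgrep/rc"
                    --smart-case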

                See my other comments about perf difference. And ag has several very critical bugs. And it's unmaintained.

                > or even been able to tell any difference in speed

                If you only search small amounts of data, then even a naive and very slow grep is likely just fine from a perf perspective.

    • dotancohen 13 hours ago

      Sell me on fd. I occasionally use find, mostly with the -name or -iname flags.

      • rkomorn 13 hours ago

        It feels nearly instant by comparison to find. That's been enough for me.

      • lawn 13 hours ago

        You don't have to type -name for the 1000th time.

        • dotancohen 6 hours ago

          Thanks.

          For other people, on Ubuntu install the `fd-find` package. The executable is named `fdfind` (no dash).

  • dotancohen 13 hours ago

    Though I use rg to initiate searches, my muscle memory keeps using grep after pipes.

    • WJW 11 hours ago

      Huh I hadn't even realized I did that. I think grep has the "filter in pipe" spot in my head while rg has the "search recursively in all files" spot.

      • burntsushi 11 hours ago

        I did it too, even after I initially released ripgrep. At this point, I've mostly re-trained my muscle memory to use `rg` in pipelines. (Particularly because I was careful to make sure `rg` worked just like `grep` does in pipelines.)

        I also find that combining `-o/--only-matching` and `-r/--replace` has replaced many of my uses of `sed` and `awk`.
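
        A rough sketch of what I mean (made-up input; `$1` in the replacement refers to the first capture group, so this prints just the captured names):

            $ printf 'name=alice id=7\nname=bob id=9\n' | rg -o 'name=(\w+)' -r '$1'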

    • kstrauser 11 hours ago

      Heh, I realized the same for myself the other day. I’ve been deliberately making myself go back and change it to rg to try to replace the muscle memory.

winrid 9 hours ago

Amazing how much faster it tends to be than my indexed search in intellij.

IlikeMadison 15 hours ago

That's what I call quality software.

rustc 11 hours ago

What is the right way to make ripgrep behave closer to `git grep`? Plain `rg` ignores files inside hidden folders like `.github`; `rg --hidden` will search `.github` but also search inside `.git`. I currently have this alias that I don't remember where I found: `rg --hidden --glob '!*/.git/*'`. Is there a better way?

I would prefer a solution that works from outside git repos, so no piping `git ls-files` into rg.

  • burntsushi 11 hours ago

    This might help: https://news.ycombinator.com/item?id=45629497

    That is, you can whitelist specific hidden files/directories.

    There is no way to tell ripgrep to "search precisely the set of tracked files in git." ripgrep doesn't read git repository state. It just looks at gitignore files and automatically ignores all hidden and binary files. So to make it work more like git, you might consider whitelisting the hidden files you want to search. To make it work exactly like git, you need to do the `git ls-files -z | xargs -0 rg ...` dance.

pseufaux 8 hours ago

> Directories containing .jj are now treated as git repositories.

So glad to see this!

kccqzy 10 hours ago

I discovered and started using the silver searcher (ag) before ripgrep existed. I don't feel a strong need to switch for marginally faster search but with different command-line switches. Am I missing some killer feature here?

  • burntsushi 9 hours ago

    Fewer bugs?

    And perf depends on your haystack size. If you have lots of data to search, it's not hard to witness a 10x difference: https://news.ycombinator.com/item?id=45629904

    As for features that ripgrep has that ag doesn't:

    * Much better Unicode support. (ag's is virtually non-existent.)

    * Pluggable preprocessors with --pre. (See the sketch at the end of this comment.)

    * Jujutsu support.

    * ripgrep can automatically search UTF-16 data.

    * ripgrep has PCRE2 support. ag only has PCRE1 (which was EOL'd years ago).

    * ripgrep has a `-r/--replace` flag that lets you manipulate the output. I use it a lot instead of `sed` or `awk` (for basic cases) these days.

    * ripgrep is maintained.

    * ripgrep has multiline search that seemingly works much better.

    * ripgrep can search files bigger than 2GB. ag seemingly can't.

    * ag has lots of whacky bugs.

    e.g.,

        $ ag -c '\w{8,} Sherlock Holmes' sixteenth.txt
        9
        $ rg -c '\w{8,} Sherlock Holmes' sixteenth.txt
        9
        $ cat sixteenth.txt | rg -c '\w{8,} Sherlock Holmes'
        9
        $ cat sixteenth.txt | ag -c '\w{8,} Sherlock Holmes'
        1
        1
        1
        1
        1
        1
        1
        1
        1
    
    Or:

        $ printf 'foo\nbar\n' | ag 'foo\s+bar'
        $ printf 'foo\nbar\n' | rg -U 'foo\s+bar'
        foo
        bar
    
    Or:

        $ ag '\w+ Sherlock Holmes' full.txt
        ERR: Skipping full.txt: pcre_exec() can't handle files larger than 2147483647 bytes.
    
    There's probably more. But that's what comes to mind.
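
    To make the --pre point above concrete, here's the kind of preprocessor script I mean (a sketch; the script name is arbitrary and it assumes pdftotext is installed). ripgrep runs it with each file path as its argument and searches the script's stdout:

        $ cat ./pre-rg
        #!/bin/sh
        # emit a searchable text rendering of the file given as $1
        case "$1" in
            *.pdf) exec pdftotext "$1" - ;;
            *)     exec cat "$1" ;;
        esac
        $ rg --pre ./pre-rg 'some phrase' docs/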

thdhhghgbhy 10 hours ago

For searching file contents, is there a way to start rg with no search string?

  • burntsushi 9 hours ago

    What do you mean? You could pass an empty pattern. But that will match everything.

    Maybe talk about your use case at a higher level.

    • thdhhghgbhy 9 hours ago

      Thanks for the response. I would like to use fzf with rg to search file contents with a previewer open. However when I first open fzf I don't wish to pass any argument to rg, until I start typing. Something like Telescope live_grep.

      • burntsushi 9 hours ago

        That's more a question for fzf than for ripgrep. ripgrep doesn't have any interactive mode. You give it arguments and it runs. That's it. ripgrep doesn't have any mode where it waits for user input (unless it's waiting for stdin).
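
        That said, the usual recipe lives entirely on the fzf side (a sketch; the flags are illustrative and worth checking against fzf's docs): start fzf with no input and have it re-run rg on every keystroke.

            $ : | fzf --disabled --ansi \
                --bind 'change:reload:rg --line-number --color=always {q} || true'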

        • tasuki 8 hours ago

          You could have incorporated some snark or something, but no, you're always the most helpful you can be. You're very inspirational - thank you!

          (Also like thanks for ripgrep I guess?)

          • thdhhghgbhy 7 hours ago

            >You could have incorporated some snark

            Why even say this?

      • bombela 6 hours ago

        Similar to other answers. With a nasty mix of vimscript generating shell commands for fzf to use, that's how I integrated rg and fd with "fzf.vim" in my neovim.

        https://github.com/bombela/fzf.vim.rgfd

        Nasty, but it works hey!

jcgrillo 5 hours ago

Every time I set up a new machine--work, personal, whatever--the first thing I do is set up my rust toolchain and the second is 'cargo install ripgrep'. I really enjoyed your talk at the Boston Rust meetup a few years back on finite state transducers. Thanks for these (among many more) contributions you've made both to software and my education as a programmer.

username223 4 hours ago

> new major version release that mostly has bug fixes, some minor performance improvements and minor new features.

It's sad that "bug fixes and performance improvements" has become a running joke as the explanation for software gavage, and this is even worse. "Fewer bugs and faster" is at least something most people want; "fewer bugs, faster, and random changes you didn't ask for" is a lot less desirable.

jodedwards 13 hours ago

[flagged]

  • burntsushi 13 hours ago

    The most important part of semver is that breaking changes are indicated by incrementing the major version. semver does not say that the major version must only be incremented when a breaking change is present.

  • o_m 13 hours ago

    I don't think they claim they are using semver. Lots of companies/projects use "major" releases just for the added hype.

  • umanwizard 12 hours ago

    Semver advocates hijacked the very common and pre-existing format of versions specified by dot-separated numbers, and then declared a specific meaning for it.

    That’s nice for people who want to use that specific meaning, but it doesn’t mean that every instance of dot-separated version numbers is “semver”, nor that everyone who chooses not to follow these rules is “doing semver wrong”.

  • kortilla 13 hours ago

    People write scripts that call ripgrep. That makes the arguments an API.

    Semver is useful to indicate breaking API changes.

    Not sure if they are following semver, but that is the argument for using semver in a cli tool.

boltzmann64 14 hours ago

Has it caught up to ugrep in terms of backward compatibility and speed yet?

  • dgacmu 14 hours ago

    This seems like an unhelpful comment?

    First of all, the ugrep performance comparisons are online (and haven't been updated to compare against this version that was only released 3 days ago). So your question is answerable:

    https://github.com/Genivia/ugrep-benchmarks

    The two are very close and both are head and shoulders faster than most other options.

    And backwards compatibility is a mixed thing, not a mandatory goal. It's admirable that ugrep is trying to be a better drop-in replacement. It's also cool that ripgrep is trying to rethink the interface for improving usability.

    (I like ripgrep in part because it has different defaults than grep that work very well for my use cases, which is primarily searching through codebases. The lack of backwards compatibility goes both ways. Will we see a posix ripgrep? Probably not. Is ripgrep a super useful and user-friendly tool? Definitely.)

  • burntsushi 12 hours ago

    To back up what I said earlier, a common case for ripgrep is to search a code repository while respecting gitignore, ignoring hidden files and ignoring binary files. Indeed, this is ripgrep's default mode.

    For example, in my checkout of the Chromium repository, notice how much faster ripgrep is at this specific use case (with the right flags given to `ugrep` to make it ignore the same files):

        $ hyperfine --output pipe 'rg Openbox' 'ugrep-7.5.0 -rI --ignore-files Openbox ./'
        Benchmark 1: rg Openbox
          Time (mean ± σ):     281.0 ms ±   3.6 ms    [User: 1294.8 ms, System: 1977.6 ms]
          Range (min … max):   275.9 ms … 286.8 ms    10 runs
    
        Benchmark 2: ugrep-7.5.0 -rI --ignore-files Openbox ./
          Time (mean ± σ):      4.250 s ±  0.008 s    [User: 4.683 s, System: 2.154 s]
          Range (min … max):    4.242 s …  4.267 s    10 runs
    
        Summary
          rg Openbox ran
           15.12 ± 0.19 times faster than ugrep-7.5.0 -rI --ignore-files Openbox ./
    
    `ugrep` actually does a lot better if you don't ask it to respect gitignore files:

        $ hyperfine --output pipe 'rg -u Openbox' 'ugrep-7.5.0 -rI Openbox ./'
        Benchmark 1: rg -u Openbox
          Time (mean ± σ):     233.9 ms ±   3.3 ms    [User: 650.4 ms, System: 2081.6 ms]
          Range (min … max):   228.8 ms … 239.8 ms    12 runs
    
        Benchmark 2: ugrep-7.5.0 -rI Openbox ./
          Time (mean ± σ):     605.4 ms ±   6.4 ms    [User: 1104.1 ms, System: 2710.8 ms]
          Range (min … max):   596.1 ms … 613.9 ms    10 runs
    
        Summary
          rg -u Openbox ran
            2.59 ± 0.05 times faster than ugrep-7.5.0 -rI Openbox ./
    
    Even ripgrep runs a little faster. Because sometimes matching gitignores takes extra time. More so, it seems, in ugrep's case.

    Now ugrep is perhaps intended to be more like a POSIX grep than ripgrep is. So you could question whether this is a fair comparison. But if you're going to bring up "ripgrep catching up to ugrep," then it's fair game, IMO, to compare ripgrep's default mode of operation with ugrep using the necessary flags to match that mode.

    Repository info:

        $ git remote -v
        origin  git@github.com:nwjs/chromium.src (fetch)
        origin  git@github.com:nwjs/chromium.src (push)
        $ git rev-parse HEAD
        1e57811fe4583ac92d2f277837718486fbb98252

  • mort96 13 hours ago

    I'm so happy ripgrep has a different interface to grep. I don't typically need ripgrep's better performance, I just use it because 'rg foo' does what I want 99% of the time while 'grep foo' does what I want 1% of the time.

    • opan 13 hours ago

      This is pretty much flipped from my experience, so I'm curious if you could expand on this. I use grep a lot to filter command output, or maybe to search all my txt file notes at once when I can't remember which file contained something. I use rg rarely; one example in recent memory is searching the source code for the game Barony to try to find some lesser-known console commands or behaviors (like what all drops a particular spellbook and how commonly).

      Does rg work in the places grep does, or is it about the type of task being done? In my examples I expect more default recursion from rg than from regular grep, and I'm searching an unknown codebase with it, whereas I often know my way around more or less when using regular grep.

      • burntsushi 12 hours ago

        `some-command | grep pattern` and `some-command | rg pattern` both work fine. You can chain `rg` commands in a shell pipeline just like you do `grep`.

        What the GP is suggesting is that their most common use case for grep is recursive search. That's what ripgrep does by default. With `grep`, you need the non-POSIX `-r` flag.

        The other bit that the GP didn't mention but is critical to ripgrep's default behavior is that ripgrep will ignore files by default. Specifically, it respects gitignore files, ignores hidden files and ignores binary files. IMO, this is what most people mean by "ripgrep does the right thing by default." Because ripgrep will ignore most of the stuff you probably don't care about by default. Of course, you can disable this filtering easily: `rg -uuu`. This is also why ripgrep has never been intended to be POSIX compatible, despite people whinging about "backwards compatibility." That's a goal they are ascribing to the project that I have never professed. Indeed, I've been clear since the beginning that if you want a POSIX compatible grep, then you should just use a POSIX compatible grep. The existence of ripgrep does not prevent that.

        Indeed, before I wrote ripgrep, I had a bunch of shell scripts in my ~/bin that wrapped grep for various use cases. I had one shell script for Python projects. Another for Go projects. And so on. These wrappers specifically excluded certain directories, because otherwise `grep -r` would search them. For big git repositories, this would in particular cause it to waste not only a bunch of time searching `.git`, but it would also often return irrelevant results from inside that directory.

        Once I wrote ripgrep (I had never been turned on to `ack` or `ag`), all of those shell scripts disappeared. I didn't need them any more.

        My understanding is that many other users have this same experience. I personally found it very freeing to get rid of all my little shell wrappers and just use the same tool everywhere. (`git grep` doesn't work nearly as well outside of git repositories for example. And it has, last I checked, some very steep performance cliffs.)

        Some users don't like the default filtering. Or it surprises them so much that they are horrified by it. They can use `rg -uuu` or use one of the many other POSIX greps out there.