Should Internal Site Search Results Pages Be Indexed?

Generally, a website's internal search results pages should not be indexed by search engines, including Google. In some cases, indexing your internal search results can even be actively harmful.
There are two important reasons not to index internal search pages:

Crawl Budget

There are a lot of pages on the web for search engines to crawl. Search engine bots allocate a certain number of crawls, known as the "crawl budget," to each site. Because of this, every website's crawling is limited, so you should not waste your budget on these sorts of pages.

In addition, you need to optimize your website's existing crawl budget. The first step is determining which pages are truly important for a search engine to crawl and index. Search result pages largely consist of content duplicated from your main pages, so you don't want crawlers wasting time on large numbers of near-duplicate pages. This can lead to index bloat.

Note: Index bloat is when a website has unnecessary pages in the search engine index.

User Experience Problems

Indexing search result pages can harm the user experience. When users land on an indexed search result page instead of the page they were actually looking for, they see irrelevant and unhelpful content. This can increase your bounce rate and hurt the user experience.
Keep in mind that users should land on a page that’s relevant, helpful, and easy to navigate. But search result pages are not suitable for those purposes.

How do I prevent search result pages from being indexed?
Generally, there are three methods, all of them on the technical SEO side.

1. Use Noindex for search result pages

The noindex method is one of the easiest ways to keep search result pages out of the index. Place this meta tag in the head of each search result page:

<meta name="robots" content="noindex, nofollow">

2. Redirect

Set up 301 (permanent) redirects from internal search pages to the appropriate category pages.
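As a hedged sketch, here is one way such a redirect might look in an nginx configuration, assuming search URLs live under /search and that a /category/ landing page exists (both paths are placeholders, not from the original article):

```nginx
# Hypothetical sketch: permanently redirect internal search URLs
# to a relevant category page. /search and /category/ are placeholders.
# The query string is matched regardless, since nginx location matching
# ignores it, and `return` does not append it to the target URL.
location = /search {
    return 301 /category/;
}
```

In practice you would point each search pattern at the most relevant category page rather than a single catch-all target.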

3. Disallow (recommended)

Add a disallow rule in the robots.txt file to block bots from crawling these pages. Example:

User-agent: *
Disallow: /search?
Allow: /
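As a quick sanity check, Python's standard-library robotparser can parse these rules and report whether a given URL would be crawlable. Note that its matching is a simple path-prefix test, which is cruder than Google's own rule matcher, so treat this as a rough approximation rather than a definitive verdict:

```python
# Rough check of the robots.txt rules above using Python's standard library.
# robotparser uses simple path-prefix matching, so this only approximates
# how real search engine crawlers interpret the rules.
from urllib import robotparser

rules = """\
User-agent: *
Disallow: /search?
Allow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# Internal search URLs are blocked...
print(rp.can_fetch("*", "https://example.com/search?q=shoes"))   # False
# ...while normal pages remain crawlable.
print(rp.can_fetch("*", "https://example.com/products/shoes"))   # True
```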

In a nutshell, keep your search result pages out of the index. Any one of these methods will solve the issue.