• Resolved verbandsbuero

    (@verbandsbuero)


    Hello,

    I have the problem that all results are crawled as extra URLs. As a result, I have thousands of unnecessary pages in my Search Console.

    How can I prevent this?

    Currently I have excluded them via robots.txt: Disallow: /*?

    Is that a good approach?

    The page I need help with: [log in to see the link]
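
    For reference, a Disallow rule only takes effect inside a User-agent group, so as a complete minimal robots.txt the blanket rule above would look like this (note that /*? blocks crawling of every URL containing a query string, not only the plugin's result pages):

    User-agent: *
    Disallow: /*?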

  • Plugin Author Jeroen Peters

    (@jeroenpeters1986)

    Hi @verbandsbuero,

    Thanks for writing to me. Do you mean there are results indexed by their starting letter? The entries themselves don’t have a URL of their own.

    What you can do to block the indexing of the specific letters is use this line in your robots.txt:

    Disallow: /vereinsratgeber/gesetze-fuer-vereine/*name_directory_starts*

    I think this should work. Please let me know the results.

    Kind regards,

    Jeroen Peters
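
    For reference, inside a complete robots.txt group the suggested rule reads as follows; the example URL underneath is only an assumption about the shape of the plugin’s result links (a query parameter whose name starts with name_directory_starts), not something confirmed in this thread:

    User-agent: *
    Disallow: /vereinsratgeber/gesetze-fuer-vereine/*name_directory_starts*

    This would block e.g. /vereinsratgeber/gesetze-fuer-vereine/?name_directory_startswith=A while leaving /vereinsratgeber/gesetze-fuer-vereine/ itself crawlable.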

    Thread Starter verbandsbuero

    (@verbandsbuero)

    Many thanks. Yes, exactly, it’s about the parameters of the results. Since I use your plugin in other places as well, I would then have to use
    Disallow: /name_directory_starts
    correct?

    Then the indexing of the whole plugin would be blocked, right?

    Is there also a way to set a canonical on these parameter URLs, pointing to the respective page?

    Plugin Author Jeroen Peters

    (@jeroenpeters1986)

    Hi @verbandsbuero,

    You are right, with Disallow: /*name_directory_starts* it should have the effect you want and stop the extra pages from being indexed as (near-)duplicates.

    NameDirectory is inserted into a page but doesn’t take it over, so it doesn’t do anything with the page’s canonical meta tag, sorry.

    Kind regards,

    Jeroen Peters
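
    For completeness, the site-wide variant confirmed above as a full group; unlike the path-specific rule, it blocks the result URLs on every page that embeds the directory:

    User-agent: *
    Disallow: /*name_directory_starts*

    One caveat: Disallow prevents crawling, not indexing as such, so URLs that are already indexed or linked from elsewhere may take a while to drop out of Search Console.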

    Thread Starter verbandsbuero

    (@verbandsbuero)

    No problem. I’ll test it and post the results.

  • The topic ‘Noindex for the results’ is closed to new replies.