Daniel Iser
Forum Replies Created
@flymike – Sorry to hear you had issues.
- You're correct, priority does matter, mainly for performance of the handling: we stop processing after we find a match.
If you don't have a lot of restrictions, you can use a snippet of code to check all restrictions and bypass priority to some degree (see the sketch below). Not the ideal solution, just something you can do.
- The error is likely coming from another plugin/theme causing interference. You can try the Health Check & Troubleshooting plugin's "Troubleshooting Mode" to test just our plugin, then activate others one by one to find the culprit. While you have only our plugin active, you might as well go ahead and update your priority order too.
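As a rough sketch of the kind of snippet mentioned in the first point above (the hook name is a placeholder for illustration, not necessarily the real filter name; check our docs for the actual one):

```php
<?php
/**
 * Force every matching restriction to be evaluated instead of stopping
 * at the first (highest priority) match.
 *
 * NOTE: 'content_control/process_all_restrictions' is a placeholder hook
 * name used purely for illustration; confirm the real filter name in the
 * Content Control docs before relying on this.
 */
add_filter( 'content_control/process_all_restrictions', '__return_true' );
```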
There is very little we can do on our end if some other code is globally hooking into the TinyMCE editor and extending it. We enqueue it the WP way, and don't have any code related to MouseEvent.mozPressure; our build tools would automatically generate a compliant modern replacement or flag it in our code editors with highlighting and build-time error notices, so it's highly likely it comes from something else.
We also have a guide from one of our other plugin’s docs on using error notices in the console to try and find the problem JS file (plugin/theme): https://docs.wppopupmaker.com/article/373-checking-javascript-errors
Let us know.
REST API
Presumably the latter is because the REST API is not considered to be logged in, but it does seem odd that the same browser can view the page content but not the json REST version of it.
Unless you are passing a proper Authorization bearer header with the request, or a valid JWT token in the URL, the request is treated as unauthenticated. It doesn't care about the WP login cookie used by front-end pages.
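For example, a request like the sketch below (placeholder site URL and credentials, using core Application Passwords for Basic auth) is treated as authenticated, while the same URL fetched without the header is not:

```php
<?php
// Placeholder credentials: use a real username and an Application Password
// generated on that user's profile screen.
$username     = 'editor_user';
$app_password = 'xxxx xxxx xxxx xxxx xxxx xxxx';

$response = wp_remote_get(
	'https://example.com/wp-json/wp/v2/pages/123',
	array(
		'headers' => array(
			// Basic auth header built from the Application Password.
			'Authorization' => 'Basic ' . base64_encode( $username . ':' . $app_password ),
		),
	)
);

// Without the Authorization header the same request runs unauthenticated,
// so restricted content is (correctly) withheld.
$body = json_decode( wp_remote_retrieve_body( $response ), true );
```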
for not logged in users, then the request is allowed.
Are you saying that a restricted page is accessible, or that the request doesn’t throw an error and shows something else?
What exactly is going wrong at the moment
That wasn't a question of whether to address your problem, it was to make sure you're attacking it the best / current way that will do the job.
It might just be that you're thinking about it differently and overlooking simpler solutions that might exist now.
Knowing you want to protect X for user Y, and Z for user A, we can tell you how we would do it, or better understand exactly how you keep running into issues others are not, rather than constantly chasing "possible" issues. I.e. it might already work how you want, just not the way you expected.
Sometimes we (users of software) don't use it entirely the way it was intended while expecting it to just work, often over-complicating things to some degree. Especially those of us who are technically proficient. I know because I do this too.
Simplest way to get to a working solution is to define a problem like this:
- Expected: How do you expect it to work?
- Outcome: How is it working now?
- Setup: What have you tried so far?
For your case though this should be specifically applied to your entire setup, mainly because you’ve defined multiple “problems”, all disparate, but somehow still related to how you want it set up. So instead of doing the list for each separate issue, do it once for your overall expectation/outcome/setup, and let us try to work it out as an entire solution instead.
@peter8nss – I think I see the issue.
We have "default" switch cases currently; I think removing them and returning false by default on fall-through (no switch case match) should resolve it efficiently.
Adding a case 'terms': return false; effectively does the same thing explicitly, but I don't think being explicit is necessary if we remove the default: and fail gracefully for all non-matches.
Our goal is to minimize case matches rather than match every option, mainly for maintainability, supposing we add new contexts in the future for example.
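Roughly, the change being described looks like this (illustrative sketch only, not the actual plugin source; the function name and contexts are made up for the example):

```php
<?php
/**
 * Illustrative sketch: no `default:` case, so unknown contexts such as
 * 'terms' fall through and return false instead of being treated like
 * posts.
 */
function rule_matches_for_context( string $context ): bool {
	switch ( $context ) {
		case 'posts':
			// Normal post-based rule checking would happen here.
			return true;

		// Intentionally no `default:` case; anything unrecognized
		// fails gracefully below.
	}

	return false;
}

var_dump( rule_matches_for_context( 'posts' ) ); // bool(true)
var_dump( rule_matches_for_context( 'terms' ) ); // bool(false)
```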
With the introduction of Query Terms, it might be useful to have a rule which matched any post (of any type) and returned false if used to test a query term.
I like this idea, but that would have to be a new minor release (2.X) rather than a simple patch. Simple enough though.
Would be nice to have a filter that allowed REST API checking to be turned off, as many people won’t need it, or will already have something else dealing with it.
Theoretically you already can, using the filter discussed previously to bypass restriction checks.
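Something along these lines would do it (the hook name is a placeholder for illustration, not a documented filter; REST_REQUEST is the standard WordPress constant set during REST API requests):

```php
<?php
/**
 * Skip restriction checks for REST API requests.
 *
 * 'content_control/protection_is_disabled' is a placeholder hook name for
 * illustration; confirm the actual filter in the plugin before relying on
 * this, and only do it if something else already secures your REST data.
 */
add_filter( 'content_control/protection_is_disabled', function ( $disabled ) {
	if ( defined( 'REST_REQUEST' ) && REST_REQUEST ) {
		return true;
	}

	return $disabled;
} );
```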
On the other hand we can’t realistically assume anything such as users won’t need it. In fact I’ll point you here as to why it was ever added in the first place: https://wpscan.com/vulnerability/4a683d85-6528-49f7-9f10-c1f59bf1db6b/
Now, we dispute that it was ever a "vulnerability"; we did so at the time, we list it in our readme as a new feature instead of a security patch, and we openly disputed it with both the www.remarpro.com team and the original reporters.
In fact we had code that explicitly opted out of applying restrictions to the REST API from v1.0 all the way to v2.1.X, but were essentially forced to add new functionality (that nobody had actually requested).
Removing it now effectively reopens that, not realistic for us.
—
For instance “is not page…” will generate true for both “posts” and term queries!
As it should?
—
For your setup, what exactly is going wrong at the moment?
- Not a search page is a special rule type; it only applies to main query contexts, as do other specialized rules like "Is Home Page" etc. Those should have no bearing on term queries.
- What do you have set in your restrictions for the Posts in Archive & Posts in other locations additional options? These are what control how it works outside of the main query (the URL visited), such as loops in widgets, footers, post meta, etc.
@peter8nss – Super helpful thanks. Will get this patched.
Also, if you added custom taxonomies to your pages, those would appear and work as expected as well.
But we can’t just add rules for content types/structures that WP doesn’t recognize, that would be very confusing.
@flymike – The content rules are auto generated from content types registered within WP.
To be clear, if you install a plugin that enables categories for pages, those rules will magically appear.
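For example, a tiny snippet like this (a minimal sketch using the core register_taxonomy_for_object_type() function) is all it takes to attach the built-in category taxonomy to pages, and the matching rules would then appear:

```php
<?php
/**
 * Attach the built-in 'category' taxonomy to the 'page' post type.
 * Once a taxonomy is registered for a post type, the auto-generated
 * content rules for it become available.
 */
add_action( 'init', function () {
	register_taxonomy_for_object_type( 'category', 'page' );
} );
```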
@peter8nss – Do you happen to know how you triggered this? I have QM on 100% of the time during development and haven't come across this, so there must be a conflict or config set that we are not testing against.
Furthermore, I don’t think all the post_ids that are passed are actually post ids. Some are term_ids, so the title lookup is getting the title of a post with the same id as that term.
Very possible, the QM integration was added before TermQuery processing was added. I’ll have to rework that a bit to ensure we keep term & post lists separate.
For terms it might help to indicate the taxonomy they come from, e.g.
Column titles might also need revising to be more generic, e.g. ID, Title/Name, Type
Excellent ideas.
In this case you're running into a performance optimization. By default only the first matched restriction is used (sorted by priority of restriction sets). This prevents needless extra rule checks etc.
To process all of them, in cases where a user might match more than one at a time, you can enable processing of all restrictions with a filter.
That filter will do what you're after, processing the secondary restrictions as well.
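As a rough illustration only (the hook name below is a placeholder, not the documented filter; check the plugin docs for the real one), a conditional version might look like:

```php
<?php
/**
 * Only process all restrictions (instead of stopping at the first match)
 * for logged-in users, where multiple restriction sets may apply.
 *
 * The hook name is a placeholder used for illustration.
 */
add_filter( 'content_control/process_all_restrictions', function ( $process_all ) {
	return is_user_logged_in() ? true : $process_all;
} );
```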
—
Keep in mind 98%+ of our users only have one restriction set up for any given user role/type match, so that is what we optimize for, but we do offer the flexibility to do it the other way as well for those that need it.
Isn’t WP CLI considered to be an “admin” access as it is only available to those with access to the website server not web users.
Yes, and no. Requests are explicitly unauthenticated by default. Yes, you can manipulate things and install plugins etc., but querying a list of content on a site with content restrictions should in all likelihood respect those restrictions unless otherwise specified, lest someone say it isn't properly restricting things.
So if you didn’t have any content restriction, then yes you should expect full results, but given you have gone through the process of adding access control, it should be respected as well.
Not hugely different than the REST API really when talking about fetching data.
In terms of this plugin’s user base, my guess is less than 0.1% of the users would realistically be using it, and of those how many honestly are using it to fetch content (not a common use case at all really).
So I’m not even sure it warrants a dedicated separate option to bypass restrictions for it, since users who are using it are much more likely to be able to simply add the required filter like you mentioned above.
—
I think there is a bit of confusion about how rule processing is handled. The rules were always applied to terms via archives and the main query. The only addition was processing Term_Query term loops, such as a tag cloud, or a list of terms shown in a blog post's meta. The rules were simply adapted to not do anything if they have nothing in common with the given context.
For example, rules like Is a Post or Is a Page will simply bail early with false when the context is a Term_Query. Rules like Is Post with Term, or Is Term Archive were always there, just only applied to the main query itself.
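Purely as an illustration of that bail-early pattern (the function, context name, and check below are made up for the example, not the plugin's real API):

```php
<?php
/**
 * Illustrative sketch of a post-based rule that bails early for term
 * query contexts.
 */
function is_post_rule_match( string $context, int $object_id ): bool {
	// A term can never be a post, so a post-based rule simply returns
	// false when asked about a Term_Query context.
	if ( 'term_query' === $context ) {
		return false;
	}

	// For main query / post query contexts, the normal post check runs.
	return 'post' === get_post_type( $object_id );
}
```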
As for the Entire Site rule, that was added for only a couple of use cases, specifically where people do want the entire site locked down, i.e. "I want my entire site protected, but I don't want to have some convoluted set of rules like 'Content is a page OR Content is a post OR content is a tag OR content is a …'".
Without that rule, protecting the entire site requires adding rules for every potential content type. With it you can add one rule.
If you were to want to protect everything except one type of content, then you could simply use a negative rule of “Content is not a Post”.
As for slowing execution, no that simple rule always returns true, ZERO processing required, thus much faster than … OR … OR … OR … which would process all of them effectively until it found a match.
Overall the rule engine here hasn't changed the way it works since before CC v1 was released. This same rule engine powers our much larger plugin Popup Maker as well, and has mainly been refined and improved over time.
The key difference now, though, is context-aware rules; if there are any issues, this is the only place they could come from, given the rest of the functionality has been well tested and perfected over the course of nearly 10 years.
For example Posts with Tax should always return false for a Term_Query, as no Term is a Post.
But for the main query, main query posts, or Post_Query it should be checked consistently.
So my thoughts for your scenario (though I’m not quite sure what your setup should be), is that you likely just need help aligning the rules you set with the expectations of what content you want it to restrict.
Can you tell us exactly what you are trying to do? There are lots of whataboutisms here, but we can chase those for a long time and never really find any issues to solve. If you say you need content x/y/z protected, we can guide you on exactly what rules and combinations of AND/OR/Groups to use.
—
I’m reviewing all of the term query rule context handling today, but I don’t foresee it leading to any bug fixes, more likely simple performance enhancements with bailing early where appropriate, and even that has minimal real world impact given we cache rule processing for the duration of the request (and long term via Object Caching in Content Control Pro).
Hope I didn’t miss anything.
Ok so after some more digging/testing, it all seems to be working as expected for those tests.
The one issue might be that since you're using negative conditions, those would broadly match any content in that page's loop as well, i.e. if your home page shows a list of posts, and one of those posts is restricted, it could still trigger the redirect depending on your protection settings.
I am seeing a small discrepancy in tag archive handling, but that seems unrelated to your particular report.
I’m hesitant to make random changes without fully being able to deduce the root problem, but I’m happy to keep exploring this issue, especially if you have further guidance.
Sorry, reread that and noticed you said
Content: Content Is Not A Page with <taxonomy> = <value>
That said, the negative conditions simply check for a false value instead of a true one; there is no inherent difference in how they are processed at all.
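Conceptually (again, just a sketch, not the real code), a negative condition runs the same underlying check and simply inverts the result:

```php
<?php
/**
 * Illustrative only: a "not" rule runs the exact same check as the
 * positive version and simply inverts the result.
 */
function rule_result( bool $raw_match, bool $negated ): bool {
	return $negated ? ! $raw_match : $raw_match;
}

var_dump( rule_result( true, false ) ); // Is A Page     -> bool(true)
var_dump( rule_result( true, true ) );  // Is Not A Page -> bool(false)
```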
Checking more on that now, but each context processes conditions differently, i.e. post query & main query check rules differently than term queries.
@peter8nss – Given your home page matches the restricted content profile, I would fully expect it to be restricted.
There is no reason it shouldn’t be given your setup example.
If you want it to not be restricted explicitly, you can add an AND Content Is Not Selected Page (Home Page).
Otherwise you would effectively be bypassing matched rules incorrectly.
—
As for the WP CLI, all requests by default are unauthenticated, so restrictions should be applied (otherwise we would get "security vulnerability reports" saying our plugin has vulnerabilities).
However, you can pass --user with an ID (I believe) and force it to run as a properly authenticated user, e.g. something like wp post list --user=123.
—
Let me know if I missed something.
@peter8nss – Appreciate the details, you nailed it. Patched for the next release.
Fix was simply to rename $result -> $intent.
—
@jbekkers – Appreciate the awesome feedback. Very happy we could solve that for you. v2.4.0 added a lot of performance improvements as well, which we are very happy with.