• The Gutenberg editor plugin was installed along with the Classic editor plugin as a fall-back precaution.
    Assessments and tests were carried out and an evaluation was made, with more than 8 factors being considered.

    The ratings below are rounded-off values based on a weighted average of the ratings for the factors:

    version 3.8.0 R = 2 (2.1) Comments – see below

    version 3.7.0 R = 2 (2.0) Minor visual improvements, common tools not at hand, “H”eading icon is a “T”, some tools at bottom of screen could be in top bar or toolbar, large number of bugs fixed.

    version 3.6.2 R = 2 (1.9) Minor visual improvements
    version 3.6.1 R = 2 (1.8) Minor visual improvements
    version 3.6 R = 2 (1.8)
    version 3.5 R = 2 (1.5)
    version 3.4 R = 1 (1.3)

    (Note – if it is possible to update the star rating, then it will reflect the rating of the most recent Gutenberg version tested and assessed.)

    Keep up the good work on this mammoth project.

    —————————————————————————-
    Some thoughts for the process of ‘Beta Testing’

    a) As well as listing new features and fixed bugs in the details of the latest release, it is handy to list some of the significant newly discovered bugs, and significant pre-existing bugs that are yet to be fixed. This helps the would-be beta tester decide what risks and consequences they are prepared to commit their testing efforts to.

    b) If there are certain conditions that are known to cause significant problems, then it would be helpful to have the plugin test for those conditions and notify the user ‘before’ they start using the plugin for production purposes.
    This would be particularly useful for checking conditions that lead to hanging, white screens, or being unable to save or update. The plugin could run the test using dummy data, either during install or after activation, but in any case before the user starts editing their own pages.
    The test could use a timer that deactivates the plugin if the test is not completed satisfactorily in time; the user is then notified of the failed test, and maybe given some tips on how to adjust their system if they wish to continue (a rough sketch of such a self-test is given after the example below).
    A recent example of how problems for some testers could easily have been avoided is the initial problem that the Gutenberg 3.7 version had with Yoast. The problem existed, and there was information that Yoast would have an update in a few days that would solve it. But that information was not prominent, and was not placed before the user’s eyes when updating to the 3.7 version, where this problem was (temporarily) introduced. If the user had been made aware, or if the plugin could self-detect the condition (Yoast being installed), then the tester would have been saved any related trouble (which could have caused lost time and anxiety), and could have done the update a few days later – trouble free.
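    Purely as an illustration of the timed self-test idea above, here is a minimal sketch in TypeScript. Every name in it (runDummySaveTest, deactivateSelf, notifyUser) is hypothetical: none of them are existing Gutenberg or WordPress APIs, just stand-ins for whatever save, deactivate and notice mechanisms the plugin actually has.

```ts
// Hypothetical timed self-test: run a dummy save, and if it does not finish
// in time, deactivate the beta plugin and tell the user why.

async function runDummySaveTest(): Promise<void> {
  // Placeholder: create and save a throwaway post using dummy data.
}

async function deactivateSelf(): Promise<void> {
  // Placeholder: deactivate the beta plugin.
}

function notifyUser(message: string): void {
  // Placeholder: show an admin notice.
  console.warn(message);
}

const TEST_TIMEOUT_MS = 10_000; // fail if the test has not finished in 10 s

async function selfTestOnActivation(): Promise<void> {
  const timeout = new Promise<never>((_, reject) =>
    setTimeout(() => reject(new Error('self-test timed out')), TEST_TIMEOUT_MS)
  );
  try {
    await Promise.race([runDummySaveTest(), timeout]);
  } catch (err) {
    await deactivateSelf();
    notifyUser(
      'The editor self-test failed (' + (err as Error).message + '). ' +
      'The plugin has been deactivated; see the tips before retrying.'
    );
  }
}

void selfTestOnActivation();
```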

    c) Beta testing is a good time to try out alternatives. These can be in the way things are done, the functions, or the user interaction. If the tester is provided with options (should they choose to use them), they can report back which options worked best for them, or rate each option. This does not mean that these options will survive through to the production version, but it would help to find which of the methods is most likely to be optimal.

    The comments above are not intended to give the impression that I have experienced any or all of these difficulties. I have read some of the reviews, and these thoughts may help improve the beta testing experience for a proportion of those testers who are presently not having the best of experiences.
    —————————————————————
    Comments on version 3.8
    1) The ‘commonly used blocks’ tool
    a) This is located at the bottom of the editor page, but would be more useful if it floated at the bottom of the window when viewing higher up a page (rather than at the bottom). This means it will be available wherever the user is looking, which is typically where the next block insertion point will be (a rough sketch of this floating behaviour is given after point c below).
    b) This needs to have a ‘tool bar’ type appearance, to help it be easily identified as a toolbar, rather than as several disjoint items that may (or may not) bear some relation to the insertion of blocks. This would help shorten the learning curve for this feature.
    c) This tool could also be draggable, and during beta testing the user could be offered options for how they would like to use it, or where they would like it placed. This could become quite a useful feature.
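    As a very rough sketch of the floating behaviour in point a), the bar could simply be pinned to the viewport. The ‘.common-blocks-bar’ selector below is an assumption for illustration only, not a real Gutenberg class name.

```ts
// Pin a (hypothetical) "commonly used blocks" bar to the bottom of the
// viewport so it stays in view while the user scrolls the editor.
const bar = document.querySelector<HTMLElement>('.common-blocks-bar');
if (bar) {
  bar.style.position = 'fixed'; // float over the editor content
  bar.style.bottom = '0';
  bar.style.left = '0';
  bar.style.right = '0';
  bar.style.zIndex = '100';     // keep it above the editor canvas
}
```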

    2) Button Style
    a) It would be handy to be able to use rectangular buttons, or rectangular buttons with smaller-radius corners.
    b) It would be handy to be able to control the ‘hover over’ and ‘depressed’ colors for the background and font of the button.

    3) Font color picker
    a) This is difficult to exit, as it does not display an OK, Done, or Exit button.
    b) The default light and dark colors do not have many combinations that work well together. It is good that the warning about bad visibility comes up. A better choice of light and dark colors would mean that most combinations would work.
    A good indication of the usability of this feature would be to draw up a table of light and dark colors, and highlight the good, bad, and mediocre combinations. The combinations might also include two light colors, or two dark colors. If the table is mostly full of bad combinations, then the table (of suggested default colors) is bad, so a better table of default colors should be found.
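    One way to draw up such a table is to score every foreground/background pair by its contrast ratio. The sketch below uses the standard WCAG contrast formula; the palette and the good/mediocre/bad thresholds are only illustrative, not Gutenberg’s actual defaults.

```ts
// Rate every color pair in a candidate palette by WCAG contrast ratio.

type Verdict = 'good' | 'mediocre' | 'bad';

// Linearise one sRGB channel (0–255) as per the WCAG definition.
function channel(c: number): number {
  const s = c / 255;
  return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
}

// Relative luminance of a '#rrggbb' color.
function luminance(hex: string): number {
  const n = parseInt(hex.replace('#', ''), 16);
  const r = (n >> 16) & 0xff, g = (n >> 8) & 0xff, b = n & 0xff;
  return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b);
}

// WCAG contrast ratio, always >= 1.
function contrast(a: string, b: string): number {
  const [hi, lo] = [luminance(a), luminance(b)].sort((x, y) => y - x);
  return (hi + 0.05) / (lo + 0.05);
}

function verdict(ratio: number): Verdict {
  if (ratio >= 4.5) return 'good';     // WCAG AA for normal text
  if (ratio >= 3.0) return 'mediocre'; // only enough for large text
  return 'bad';
}

// Hypothetical default palette to evaluate.
const palette = ['#ffffff', '#eeeeee', '#abb8c3', '#0073aa', '#111111'];

for (const fg of palette) {
  for (const bg of palette) {
    if (fg === bg) continue;
    const r = contrast(fg, bg);
    console.log(`${fg} on ${bg}: ${r.toFixed(2)} -> ${verdict(r)}`);
  }
}
```

    If most pairs come out ‘bad’, the palette itself is the problem, which is exactly the point made above.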

  • Hi MKSnMKS,

    What are the 8 factors/criteria you are using? They would be a good benchmark. And, have you assessed the current editor based on the same factors?

    Thread Starter MKSnMKS

    (@mksnmks)

    Hello irishetcher,

    I used more than 8.
    I regarded them as factors, not criteria;
    By ‘factors’ I meant ‘things that are considered’.
    I am not sure if you have a specific meaning for ‘criteria’, but that usually carries an implication of meeting some standard, or datum.

    The factors could be used as a bench-marking system, though the value assigned to each depends on the user who is making it (and we were invited to give our feedback). So unless there is some actual form of measurement, it’s all qualitative rather than quantitative.
    But the difference between one version and the next is probably more quantified (though still really just qualitative), assuming the same user and the same user’s evaluation approach.

    re your mention of criteria;
    If some measures could be made in a quantitative sense, then they could form a sort of ‘performance parameter’, which anybody could then choose to use with their own criteria set for each.

    So my system is chosen not so much to reflect some sort of actual real value, but to be able to track the ‘change’ from version to version, and the ‘rate of change’. Some versions may be quantum leaps, others only minor improvements, and some may be a backward step (in the opinion of the user making the assessment). But this would all add up to a progression in time.

    You may have seen some suggestions I’ve made for users to discuss, to help come up with something that testers/users may think is suitable – in cases where many testers/users seem to think something is not the best. Once there is some sort of consensus about what a number of them would like, somebody might choose to do something with any of them. One has already been placed on GitHub, about feedback built directly into the editor (for easy use during beta testing), but it may have been a bit premature, especially as not many others commented on it, nor was there any decision about what the feedback would include. But anyway, that’s just an example.

    These are 8 of the considerations the assessment was based on:
    1) Initial impression (first 30 seconds to 2 minutes)
    2) UI (appearance, layout, beauty, consistency, obviousness)
    3) Intuitiveness (ease of becoming proficient just by noticing the obvious)
    4) Usability (smoothness of workflow, speed, simple ways to do complex tasks, etc.)
    5) Production effectiveness (workflow – impediments and speed features)
    6) Functions (capability, accessibility)
    7) Reliability, robustness (perceived bugginess, and confidence)
    8) Potential, promise & target design goals

    Note
    The weighted average can vary in the importance given to each factor as the project moves towards completion or an impending target date. This reflects that although a factor may be assessed at some value, it may or may not carry the same relative importance throughout the project.
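    To make the scheme concrete, here is a small sketch of how such a rating could be computed. The factor names come from the list above, but the scores and weights are made up purely for illustration.

```ts
// Overall rating as a weighted average of factor scores, where the weights
// can change from version to version as the project nears its target date.

interface FactorScore {
  factor: string;
  score: number;   // assessed value, e.g. on a 1–5 scale
  weight: number;  // relative importance for this assessment
}

function weightedRating(scores: FactorScore[]): number {
  const totalWeight = scores.reduce((sum, s) => sum + s.weight, 0);
  const weightedSum = scores.reduce((sum, s) => sum + s.score * s.weight, 0);
  return weightedSum / totalWeight;
}

// Illustrative assessment for one version (not the actual numbers used above).
const assessment: FactorScore[] = [
  { factor: 'Initial impression',       score: 2, weight: 1 },
  { factor: 'UI',                       score: 2, weight: 2 },
  { factor: 'Intuitiveness',            score: 2, weight: 2 },
  { factor: 'Usability',                score: 2, weight: 2 },
  { factor: 'Production effectiveness', score: 2, weight: 2 },
  { factor: 'Functions',                score: 2, weight: 1 },
  { factor: 'Reliability, robustness',  score: 2, weight: 2 },
  { factor: 'Potential, promise',       score: 3, weight: 1 },
];

const exact = weightedRating(assessment); // ≈ 2.1 with these numbers
const star = Math.round(exact);           // rounded-off star value, here 2
console.log(exact.toFixed(1), star);
```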

    Thank you for your interest.

    Thanks MK for going to the trouble of explaining all that. It makes sense to me now in terms of the factors you listed. You are taking a very precise approach to this, going by the methodology used, whereas I am looking at specific pain points related to workflows that I use for specific use cases, and which I have been using for a long time with the current editor. These aren’t currently achievable with the way Gutenberg functions.

    It wouldn’t take much to bring some of the functionality back into the picture and maintain the concept of working with blocks, which I do see as being very useful. In addition I feel there are some missed opportunities that would add extra value and that should have been added to the current editor years ago. I have already posted these ideas elsewhere.

    Thread Starter MKSnMKS

    (@mksnmks)

    Hello irishetcher,

    I have tried to use a general approach that is as inclusive as possible of the various aspects that different users (and developers) may be interested in.

    re your comment;

    … whereas I am looking at specific pain points related to workflows that I use for specific use cases and which I have been using for a long time with the current editor.

    With your objective, you would probably be interested in assessments related to:

    4) Usability (smoothness of workflow, speed, simple ways to do complex tasks, etc.)
    5) Production effectiveness (workflow – impediments and speed features)
    6) Functions (capability, accessibility)

    What I regard as an important first step in the assessment of usability is, for any task done:
    i) the number of mouse moves, mouse clicks, menu selections, and mouse motions (such as moving from one side of the screen to the other, up and down, scrolling, or long drags);
    ii) tools that are obviously and conveniently at hand. As soon as something needs to be explained, it is not obvious. As soon as the explanation needs to be obtained from outside of the task, outside of the environment, out on the internet, then the help is not at hand. Tools which are available, but are largely unknown and not revealed, are not at hand. Tools which are very useful, but are located clicks and scrolls away, and not able to be relocated, are also not at hand. (Note: there are always shortcomings in developing a package, and for a package to be used it must first of all be able to work, so the way in which the user interacts with the package, and vice versa, comes as a result of the functions that are able to be performed. Convenience for the user usually follows later, though some other development processes allow for concurrent and largely separate development of the UI and the central engine.)

    Your comments that follow from the one above probably have a similarity to the other factors, beyond the 8 listed above, that I have also used to make assessments.

    Re your comment;

    I have already posted these ideas elsewhere.

    May I ask where?

    Thanks

    Andrew Nevins

    (@anevins)

    WCLDN 2018 Contributor | Volunteer support

    Hi @mksnmks, I am very sorry to derail your review in this way, but I couldn’t find you on Slack.

    We really appreciate your attention to detail and effort to explain things in a clear manner. We have had to moderate your responses in a recent review that shall not be named because the review was being derailed. Each response in a review will trigger the Original Poster to be emailed with a snippet of that response. That’s great if the responses answer the Original Poster’s concerns. In that review, we were responding to other users of the review that didn’t help the Original Poster. By responding to those people we were derailing the thread.

    I hope that helps explain why. I needed to tell you because you’re not wrong in the verbiage of your responses.

  • The topic ‘Gutenberg futuristic editor & pagebuilder’ is closed to new replies.