Applixure User Ideas

Our customers have made these suggestions for improving Applixure. Add yours too!

Improve Clarity and Accuracy of Scoring & Dashboard Weighting

I’ve noticed some inconsistencies in how scores are presented across Applixure, which makes it difficult to trust them as reliable indicators. For example, in our environment the overall device score is shown as 3.4, yet no individual device scores below 3.6, and most are above 4.5. This is just one case, but it highlights a broader issue: scores across the platform often feel disconnected from the underlying data.
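To make the inconsistency concrete: any plain or weighted average of per-device scores is bounded below by the lowest individual score, so an overall 3.4 cannot come from averaging devices that all score 3.6 or above. A minimal sketch with made-up scores (these are illustrative, not our actual fleet data):

```python
# Hypothetical per-device scores: none below 3.6, most above 4.5
device_scores = [3.6, 3.8, 4.5, 4.6, 4.7, 4.8, 4.9]

def weighted_average(scores, weights=None):
    """Plain or weighted mean; weights default to equal."""
    if weights is None:
        weights = [1.0] * len(scores)
    total = sum(weights)
    return sum(s * w for s, w in zip(scores, weights)) / total

avg = weighted_average(device_scores)
# A convex combination can never fall below the minimum score,
# so an overall 3.4 implies something other than device averaging.
assert avg >= min(device_scores)
print(round(avg, 2))  # → 4.41
```

Whatever aggregation Applixure actually uses, documenting it would resolve exactly this kind of confusion.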

I understand that Applixure’s official guidance is to look past the scores and focus on the specific issues themselves. However, I’d argue that scores are hugely valuable as a quick reference—especially for larger environments or for users who are newer to the platform. They offer a high-level view of where attention is needed and can drive faster, more confident decision-making.

Suggestions:

  1. Clarify how global scores are calculated – Help users understand how aggregate scores are derived from individual components to improve trust in the metrics.

  2. Apply transparent weighting – Consider introducing a default weighting model that reflects Applixure’s own sense of issue criticality, but also allows customisation by users.

  3. Prioritise by score impact – Let the dashboard surface the issues that most significantly affect scores (across devices, software, and other modules), and dynamically reorder them as fixes are applied.

  4. Make scores actionable – Ensure scores directly support prioritisation by clearly linking them to the specific factors contributing to degradation.
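To illustrate how suggestions 2 and 4 could fit together, here is a sketch of a transparent, customisable weighting model. The issue categories, weights, and formula are invented for illustration; Applixure's real scoring internals are not public here.

```python
# Hypothetical default weights reflecting vendor-assigned criticality
# (suggestion 2). All names and values are made up for this sketch.
DEFAULT_WEIGHTS = {
    "disk_health": 3.0,
    "boot_time": 2.0,
    "pending_updates": 1.0,
}

def device_score(penalties, overrides=None, max_score=5.0):
    """Start from max_score and subtract weighted issue penalties.

    penalties: {issue: severity in 0..1}; overrides lets a user
    replace any default weight (suggestion 2).
    Returns the score plus contributing factors ranked by impact
    (suggestion 4).
    """
    weights = {**DEFAULT_WEIGHTS, **(overrides or {})}
    deductions = {i: weights[i] * sev for i, sev in penalties.items()}
    score = max(0.0, max_score - sum(deductions.values()))
    ranked = sorted(deductions.items(), key=lambda kv: -kv[1])
    return score, ranked

score, ranked = device_score({"disk_health": 0.2, "boot_time": 0.5})
print(round(score, 1), ranked[0][0])  # → 3.4 boot_time
```

Because each deduction is attributed to a named issue, the dashboard could surface the top-ranked factors and reorder them as fixes land (suggestion 3), making the score a genuine prioritisation tool rather than an opaque number.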

In my experience, people are naturally drawn to scores. They provide a fast, intuitive route into the platform’s deeper insights, provided those scores feel accurate, consistent, and actionable.

Would love to see the scoring model evolve to reflect this.

  • Nigel Hoar
  • Apr 25 2025