---
title: "Explore"
path: "/programs/explore.html"
id: "programs/explore"
---

<style>
li ul {
  list-style-type: circle;
}
</style>

The Explore dashboard enables you to compare your [response times](response-targets.html), submissions, and spend data to those of other programs. This lets you take a deep dive into your data to see which areas you need to focus on to improve your program. You can also create benchmarks to specify the kinds of programs you want to compare yours to, based on program type, industry, and company headcount.

> **Note:** This feature is currently only available for Enterprise programs.

To access the Explore dashboard:

1. Go to **Program Dashboard > Explore**.
2. Select the appropriate variables for these fields:

Field | Details
----- | -------
Metric | What you want to measure. <br><br>You can choose from: <br><li>[Response Targets](response-target-metrics.html) metrics:<ul><li>Median [time to resolution](response-target-metrics.html) <li>Median [time to triage](response-target-metrics.html) <li>Median [time to bounty](response-target-metrics.html) <li>Median [time to first response](response-target-metrics.html) <li>Missed resolution targets <li>Missed triage targets <li>Missed bounty targets <li>Missed first response targets</ul> <li>[Submissions metrics](report-states.html)<ul><li>Needs more info <li>Resolved <li>Closed as Duplicate <li>Closed as N/A <li>Closed as Informative <li>Closed as Spam <li>Closed <li>Re-opened <li>Triaged <li>Submitted</ul> <li>Spend metrics:<ul><li>Number of Bounties Paid <li>Sum of Bounties Paid</ul><br>**Note:** *Median* is the midpoint of your dataset. We use the median rather than the average because it prevents your data from being skewed by one-off instances.<br><br>*Missed* is calculated as the number of reports that took longer than the targets you set for yourself.
Severity | The rating of how severe a vulnerability is.<br><br>You can choose from:<br><li>All severities <li>No severity <li>Critical <li>High <li>Medium <li>Low <li>None
|
Start date | The date you want to start measuring from.
|
End date | The date you want to measure until.
|
View by | How you want to view your data. <br><br>You can choose from: <li>Submission date: The date the report was submitted (*displays for all Median time metrics and Submission activity*) <li>Resolution date: The date the report was resolved (*displays when Median time to resolution is the metric*)
Interval | The time between each data point.<br><br>You can choose from: <li>Week <li>Month <li>Quarter <li>Year
Filter | You can filter your metrics by: <li>Severities <li>Assets <li>Weaknesses <li>Current report state <li>Disclosed state <li>Engaged by your H1 triage team <li>Custom fields
Benchmarks | A filter you can create from the program characteristics you want to compare your program to. <br><br>Each benchmark is an aggregate measure of data from other programs on HackerOne. All data is anonymized so that no program's data is exposed.
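
The *Median* and *Missed* calculations in the Metric row above can be illustrated with a minimal Python sketch. The resolution times and the 24-hour target below are hypothetical values invented for illustration; they are not taken from any real program.

```python
from statistics import mean, median

# Hypothetical resolution times (in hours) for a set of reports.
# One outlier (400 h) shows why the dashboard uses the median:
# the mean is skewed by one-off instances, the median is not.
resolution_times = [10, 12, 14, 15, 18, 20, 400]

print(f"mean:   {mean(resolution_times):.1f} h")    # 69.9 h, pulled up by the outlier
print(f"median: {median(resolution_times):.1f} h")  # 15.0 h, the midpoint of the dataset

# "Missed" counts reports that took longer than the target you set
# for yourself (here, a hypothetical 24-hour resolution target).
target_hours = 24
missed = sum(1 for t in resolution_times if t > target_hours)
print(f"missed targets: {missed} of {len(resolution_times)}")  # 1 of 7
```

A single slow report moves the mean from the mid-teens to nearly 70 hours while the median stays at 15, which is why the dashboard reports medians.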

Once you've specified how you'd like to view your data, you can drill down into it to see how your program has been performing. You'll get a holistic view of what is and isn't going well and be able to discern which areas need improvement.

You can also see the list of reports that apply to the metric you're viewing.

### Use Cases
Here are some example use cases showing why you might want to use Explore.

**Use Case 1: Analyze your data**
<br>You want to understand how well your team is doing in terms of resolution speed so that you can patch vulnerabilities as quickly as possible, but you don't have a clear understanding of how your program has been doing over time or what could improve.

Using Explore, you can view key metrics such as Response times, Submissions, and Spend. From these metrics, you can drill down into specific data spikes and compare your performance to previous time periods.
| 44 | + |
| 45 | +**Use Case 2: Compare yourself to other programs** |
| 46 | +<br>You want to understand how well your company is doing in terms of response speed, but you don’t have a clear understanding of what’s considered a “good” speed. Looking at internal data isn’t enough because you have no idea if the company’s security team is doing well compared to the industry at large. |

By choosing **Benchmarks** in Explore, you can gain insight into your program's performance against aggregated market data from other companies based on program type, industry, and company headcount.