Methodology

How VueLeaf turns forum discussion into evidence teams can defend

Coverage scope, sentiment handling, attribution logic, benchmarking rules, and known limitations. Use this page alongside the sample alert to inspect how the methodology produces a real output.

How a forum thread becomes evidence

01
Collection
Captured from a defined forum and subreddit set with source context preserved.
02
Context
Grower language interpreted in-thread, not by keyword alone.
03
Attribution
Brand, topic, and likely driver assigned with reviewable rationale.
04
Evidence frame
Evaluated within a defined source set, brand set, and time window.
05
Triage
Structured output routed with evidence, attribution, and the next recommended step.
Defined forum set · Explicit comparison frame · Reviewable source thread · Limitations stated. Updated March 2026. Related: Sample alert · Forums directory · Security.
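The five steps above end in a structured, triage-ready record. A minimal illustrative sketch of what such a record might carry, assuming a Python representation; every field name and value here is hypothetical, not VueLeaf's actual schema:

```python
from dataclasses import dataclass

@dataclass
class TriagedSignal:
    source_forum: str            # 01 Collection: where the thread was captured
    thread_url: str              # source context preserved for audit
    thread_excerpt: str          # 02 Context: interpreted in-thread
    brand: str                   # 03 Attribution: reviewable assignment
    likely_driver: str
    brand_set: tuple             # 04 Evidence frame: explicit comparators
    time_window: str
    next_step: str               # 05 Triage: recommended follow-up
    low_confidence: bool = False # ambiguity stays visible, not hidden

# Hypothetical example record
signal = TriagedSignal(
    source_forum="ICMag",
    thread_url="https://example.org/thread/123",  # placeholder URL
    thread_excerpt="3 out of 5 seeds popped",
    brand="SeedBankA",                            # hypothetical brand
    likely_driver="germination failure",
    brand_set=("SeedBankA", "SeedBankB"),
    time_window="2026-02-01..2026-02-28",
    next_step="review source thread",
)
```

The point of the sketch is the shape, not the fields: evidence, attribution, frame, and next step travel together, so a reviewer can walk from the rollup back to the thread.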

Three questions evaluators raise before a demo or budget decision

01

Where is VueLeaf looking?

A defined set of grower forums and cannabis subreddits. Not review sites, marketplaces, or the open web.

Source scope
02

How are signals interpreted?

Contextual classification and direct-mention attribution, not keyword polarity or even-spread scoring.

Contextual classification
03

What do the numbers mean?

Scoped benchmarks tied to a known brand set, forum set, and time range. Not market-wide estimates.

Benchmark rules

A defined source map, not total-market coverage

VueLeaf monitors grower forums, specialist cultivation communities, and selected cannabis subreddits where growers compare products, troubleshoot failures, document outcomes, and influence purchasing decisions. These include ICMag, THCFarmer, Rollitup, 420 Magazine, Autoflower Network, Overgrow, and subreddits such as r/microgrowery, r/autoflowers, and r/cannabiscultivation.

VueLeaf does not monitor review sites, marketplaces, general social media, or the open web. Every summary view, alert, and benchmark belongs to a specific forum set and time window. If the source set or date range changes, the outputs change with them.

Collection preserves the thread context needed for analysis and audit, so teams can move from a rollup metric to the source thread that produced it. For a broader source narrative, see the forum monitoring overview.

Contextual classification, not keyword polarity

Generic sentiment models produce false reads on cannabis community language. VueLeaf interprets each mention inside its local discussion context before a label is assigned.

Thread-level context review (illustrative)
ICMag · Seed reviews / Recent thread

Been running this breeder for two seasons. First round was solid, but the latest batch is rough. 3 out of 5 seeds popped, and two of those showed intersex traits by week 4. Meanwhile the Chem crosses from last year were absolute fire, so I know the genetics can perform. Something changed in QC or sourcing.

Negative signal
"3 out of 5 seeds popped"
Generic NLP reads this as a neutral numeric statement.
Grower context: germination failure rate. Reputation-damaging for a seed bank.
Positive signal
"absolute fire"
Generic NLP reads "fire" as danger or negative
Grower context: strong endorsement of prior batch quality.
Thread-level read
Overall: negative shift for this breeder
The positive reference is about a past batch. The complaint is about the current one. The thread-level signal is a quality decline.

Sentiment labels are tied to the driver and the thread, not just the polarity direction. When a team reviews a shift, the surrounding thread evidence is accessible so they can inspect the cause. For a longer-form reporting example, see the 2026 forum sentiment report.
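The thread-level read above can be sketched in code. This is an illustrative toy, not VueLeaf's model: the roll-up rule (current-batch evidence outweighs past-batch praise) and all field names are assumptions made for the example.

```python
def thread_level_read(mentions: list[dict]) -> str:
    """Resolve per-mention signals into one thread-level label.

    Toy rule mirroring the ICMag example: signals about the *current*
    batch decide the thread-level read, so past praise does not offset
    a current quality complaint.
    """
    current = [m for m in mentions if m["refers_to"] == "current"]
    if not current:
        return "neutral"
    if any(m["polarity"] == "negative" for m in current):
        return "negative shift"
    if all(m["polarity"] == "positive" for m in current):
        return "positive"
    return "mixed"

# Mention-level labels from the thread excerpt above
mentions = [
    {"text": "3 out of 5 seeds popped", "polarity": "negative", "refers_to": "current"},
    {"text": "absolute fire",           "polarity": "positive", "refers_to": "past"},
]
print(thread_level_read(mentions))  # -> negative shift
```

Even in this toy form, the key property holds: the label is a property of the thread, not of any single phrase's polarity.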

Which brand gets credit for the shift

Grower discussions frequently name multiple brands in the same thread. Attribution narrows from broad to specific through four weighting steps.

Attribution decision logic (illustrative)
01
Direct mention weighted over ambient
Explicit claims about a brand receive attribution. Passing references do not split credit evenly.
02
Thread context shapes signal type
A complaint in a troubleshooting thread is weighted differently from a mention in a gear comparison.
03
Forum and time window preserved
A cluster on a specialist forum carries more weight than a single mention on a general subreddit.
04
Ambiguity stays visible
Low-confidence attributions are flagged, not hidden. The evidence trail is available for team review.

Attribution is built to help teams decide what to review next. If the assignment is ambiguous, the page keeps that ambiguity visible so teams can inspect the source thread and decide.
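The four weighting steps can be sketched as a scoring pass. All weights, thresholds, and names below are hypothetical illustrations of the logic described, not VueLeaf's implementation:

```python
def attribute(mentions: list[dict], ambiguity_margin: float = 0.15) -> dict:
    """Score brands from weighted mention evidence; flag close calls."""
    forum_weight = {"specialist": 1.0, "general": 0.5}  # step 3: forum context
    scores: dict[str, float] = {}
    for m in mentions:
        w = 1.0 if m["direct"] else 0.2                 # step 1: direct over ambient
        w *= forum_weight.get(m["forum_type"], 0.5)
        scores[m["brand"]] = scores.get(m["brand"], 0.0) + w
    ranked = sorted(scores.items(), key=lambda kv: -kv[1])
    # Step 4: ambiguity stays visible when the top two scores are close.
    ambiguous = (len(ranked) > 1 and
                 ranked[0][1] - ranked[1][1] <= ambiguity_margin * ranked[0][1])
    return {"attributed_to": ranked[0][0], "scores": scores, "ambiguous": ambiguous}

# Hypothetical thread naming two brands
result = attribute([
    {"brand": "SeedBankA", "direct": True,  "forum_type": "specialist"},
    {"brand": "SeedBankB", "direct": False, "forum_type": "general"},
])
print(result["attributed_to"], result["ambiguous"])  # -> SeedBankA False
```

Step 2 (thread type shaping signal type) would add another multiplier in the same loop; the structure stays the same: weighted evidence in, a reviewable assignment plus an ambiguity flag out.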

What the numbers include - and what they do not

Benchmark outputs are only useful when the denominator is explicit.

Share of voice definition (illustrative)
SOV = tracked mentions for a brand ÷ total tracked mentions for the brand set, within the defined forum set and time range
Change the frame and the score changes
Volume and credibility are not the same signal
Forum-level breakdowns stay visible
Same brand, three forums, different reality:
Reddit: High. Visibility amplified by upvote mechanics and screenshot sharing.
Rollitup: Moderate. Recommendation threads drive long-tail search visibility.
THCFarmer: Low. Expert growers build the evidence that other communities cite.

The same brand leads on Reddit and barely registers where expert trust is built. A blended SOV hides this. Forum-level breakdowns expose it. For the full workflow, see the share-of-voice feature.
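The definition and the blending problem can be made concrete with a few lines of arithmetic. All brands and counts below are invented for illustration:

```python
def sov(brand: str, mention_counts: dict[str, int]) -> float:
    """SOV = brand mentions ÷ total mentions across the tracked brand set."""
    total = sum(mention_counts.values())
    return mention_counts.get(brand, 0) / total if total else 0.0

# Hypothetical per-forum mention counts for a two-brand set
per_forum = {
    "reddit":    {"BrandX": 80, "BrandY": 20},
    "rollitup":  {"BrandX": 30, "BrandY": 70},
    "thcfarmer": {"BrandX": 5,  "BrandY": 95},
}

# Forum-level SOV: 0.80 on Reddit, 0.30 on Rollitup, 0.05 on THCFarmer
for forum, counts in per_forum.items():
    print(forum, round(sov("BrandX", counts), 2))

# The blended number (~0.38) hides the collapse on the expert forum.
blended: dict[str, int] = {}
for counts in per_forum.values():
    for brand, n in counts.items():
        blended[brand] = blended.get(brand, 0) + n
print("blended", round(sov("BrandX", blended), 2))
```

Change the denominator (the brand set, forum set, or window) and every number changes with it, which is why the frame is always stated.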

What VueLeaf does not claim to do

The methodology supports real decisions, which means the limits need to be explicit.

Forum-native, not total-market

VueLeaf reflects what happens inside monitored grower communities. It does not claim to represent the entire cannabis market, all customer channels, or all online discussion.

Coverage composition shapes interpretation

Some communities are technical and specialist. Others are broader and faster-moving. The composition of the source map affects what a buyer should infer from the output.

Benchmarks require explicit comparator sets

Share-of-voice outputs only make sense when the brand sets are known and stable for the decision window being reviewed.

Source review remains part of the workflow

VueLeaf accelerates judgment; it does not replace it. Teams should inspect source threads before treating a high-stakes signal as settled fact.

This page explains how VueLeaf defines scope, interprets community discussion, and frames benchmark outputs. It does not replace the security page, which covers infrastructure, access controls, encryption, and data handling. For rollout, proof, and plan packaging, continue to pricing.

Evaluator questions

Can a team inspect the source thread behind a signal?

Yes. The sample alert shows what that evidence path looks like in practice.

How should a buyer use this page?

Read it before the first demo. Open the sample alert to compare the methodology with a real output. Forward this page to procurement as the trust reference.

Does VueLeaf claim to measure the whole cannabis market?

No. The output is strongest when teams want community-native evidence rather than a market-wide estimate.