feat: add ossf data fetcher (CM-952) #3839
Merged: ulemons merged 13 commits into feat/add-dal-automatic-project-discovery from feat/add-assf-data-fetcher on Mar 26, 2026.
Commits (13, all by ulemons):

- 0ee8ef1 fix: push lock file
- 66019ea feat: schedule structure
- 99e6320 fix: lint
- e2c144f fix: lint
- f0123d6 fix: format
- ad1e4cf fix: lint
- 41c7a11 fix: update cron expression
- 151e228 feat: mode incremental
- 6a79d0e fix: update readme
- 30313f8 fix: add new source
- 08f2d04 fix: add dependencies
- cc9b4dc refactor: fix eslint
- a8062f5 fix: stream destroy
services/apps/automatic_projects_discovery_worker/README.md (new file, 73 additions, 0 deletions)
# Automatic Projects Discovery Worker

Temporal worker that discovers open-source projects from external data sources and writes them to the `projectCatalog` table.

## Architecture

### Source abstraction

Every data source implements the `IDiscoverySource` interface (`src/sources/types.ts`):

| Method                        | Purpose                                                                      |
| ----------------------------- | ---------------------------------------------------------------------------- |
| `listAvailableDatasets()`     | Returns available dataset snapshots, sorted newest-first                     |
| `fetchDatasetStream(dataset)` | Returns a readable stream for the dataset (e.g. HTTP response)               |
| `parseRow(rawRow)`            | Converts a raw CSV/JSON row into an `IDiscoverySourceRow`, or `null` to skip |

Sources are registered in `src/sources/registry.ts` as a simple name → factory map.

**To add a new source:** create a class implementing `IDiscoverySource`, then add one line to the registry, as sketched below.
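The interface and registry themselves are not part of this diff view, so here is a rough sketch of what `src/sources/types.ts` and `src/sources/registry.ts` could look like, assembled from the method table above and the field names used in `activities.ts` further down. The optional score fields and the `format` flag are inferred from this PR; the factory map contents are illustrative only:

```ts
import type { Readable } from 'node:stream'

export interface IDatasetDescriptor {
  id: string // snapshot identifier; newest sorts first
  url: string // where fetchDatasetStream downloads from
}

export interface IDiscoverySourceRow {
  projectSlug: string
  repoName: string
  repoUrl: string
  ossfCriticalityScore?: number
  lfCriticalityScore?: number
}

export interface IDiscoverySource {
  // 'csv' rows go through csv-parse; 'json' streams emit pre-parsed objects
  format: 'csv' | 'json'
  listAvailableDatasets(): Promise<IDatasetDescriptor[]>
  fetchDatasetStream(dataset: IDatasetDescriptor): Promise<Readable>
  parseRow(rawRow: Record<string, unknown>): IDiscoverySourceRow | null
}

// registry.ts: the name → factory map; adding a source is one new entry here.
type SourceFactory = () => IDiscoverySource

const registry: Record<string, SourceFactory> = {
  // 'ossf-criticality-score': () => new OssfCriticalityScoreSource(),
}

export function getSource(name: string): IDiscoverySource {
  const factory = registry[name]
  if (!factory) {
    throw new Error(`Unknown discovery source: ${name}`)
  }
  return factory()
}

export function getAvailableSourceNames(): string[] {
  return Object.keys(registry)
}
```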
### Current sources

| Name                     | Folder                                | Description                                                                          |
| ------------------------ | ------------------------------------- | ------------------------------------------------------------------------------------ |
| `ossf-criticality-score` | `src/sources/ossf-criticality-score/` | OSSF Criticality Score snapshots from a public GCS bucket (~750K repos per snapshot) |

### Workflow

```
discoverProjects({ mode: 'incremental' | 'full' })
  │
  ├─ Activity: listDatasets(sourceName)
  │    → returns dataset descriptors sorted newest-first
  │
  ├─ Selection: incremental → latest only, full → all datasets
  │
  └─ For each dataset:
       └─ Activity: processDataset(sourceName, dataset)
            → HTTP stream → csv-parse → batches of 5000 → bulkUpsertProjectCatalog
```
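The workflow file is not included in this diff, but the diagram above maps naturally onto Temporal's TypeScript SDK. A minimal sketch, assuming only the activity names and timeout values from this README; the argument shape and the `IDiscoverProjectsArgs` name are made up for illustration:

```ts
import { proxyActivities } from '@temporalio/workflow'

import type * as activities from '../activities'

// Timeouts mirror the table in the Timeouts section below.
const { listDatasets } = proxyActivities<typeof activities>({
  startToCloseTimeout: '2 minutes',
  retry: { maximumAttempts: 3 },
})

const { processDataset } = proxyActivities<typeof activities>({
  startToCloseTimeout: '30 minutes',
  retry: { maximumAttempts: 3 },
})

export interface IDiscoverProjectsArgs {
  sourceName: string // hypothetical parameter name
  mode: 'incremental' | 'full'
}

export async function discoverProjects(args: IDiscoverProjectsArgs): Promise<void> {
  // Descriptors arrive sorted newest-first, so "latest only" is a slice.
  const datasets = await listDatasets(args.sourceName)
  const selected = args.mode === 'incremental' ? datasets.slice(0, 1) : datasets

  // Datasets are processed sequentially; each activity streams, parses,
  // and bulk-upserts in batches on its own.
  for (const dataset of selected) {
    await processDataset(args.sourceName, dataset)
  }
}
```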
### Timeouts

| Activity           | startToCloseTimeout | retries |
| ------------------ | ------------------- | ------- |
| `listDatasets`     | 2 min               | 3       |
| `processDataset`   | 30 min              | 3       |
| Workflow execution | 2 hours             | 3       |

### Schedule

Runs daily at midnight via Temporal cron (`0 0 * * *`).
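`schedules/scheduleProjectsDiscovery.ts` is not shown in this diff. One hedged way to register such a trigger with the Temporal client's Schedules API; only the cron expression comes from the README, while the schedule ID, task queue, and arguments are placeholders:

```ts
import { Client, Connection } from '@temporalio/client'

export async function scheduleProjectsDiscovery(): Promise<void> {
  const connection = await Connection.connect() // server address from env in practice
  const client = new Client({ connection })

  await client.schedule.create({
    scheduleId: 'automatic-projects-discovery', // hypothetical ID
    spec: {
      // The only detail taken from the README: daily at midnight.
      cronExpressions: ['0 0 * * *'],
    },
    action: {
      type: 'startWorkflow',
      workflowType: 'discoverProjects',
      args: [{ sourceName: 'ossf-criticality-score', mode: 'incremental' }],
      taskQueue: 'automatic-projects-discovery', // hypothetical task queue
    },
  })
}
```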
## File structure

```
src/
├── main.ts                           # Service bootstrap (postgres enabled)
├── activities.ts                     # Barrel re-export
├── workflows.ts                      # Barrel re-export
├── activities/
│   └── activities.ts                 # listDatasets, processDataset
├── workflows/
│   └── discoverProjects.ts           # Orchestration with mode selection
├── schedules/
│   └── scheduleProjectsDiscovery.ts  # Temporal cron schedule
└── sources/
    ├── types.ts                      # IDiscoverySource, IDatasetDescriptor
    ├── registry.ts                   # Source factory map
    └── ossf-criticality-score/
        ├── source.ts                 # IDiscoverySource implementation
        └── bucketClient.ts           # GCS public bucket HTTP client
```
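`bucketClient.ts` is likewise not part of this diff. As a sketch of the general technique, a public GCS bucket can be listed and downloaded anonymously over HTTPS via the standard GCS JSON API; the bucket name below is a placeholder, not the worker's real configuration:

```ts
import { Readable } from 'node:stream'
import type { ReadableStream as WebReadableStream } from 'node:stream/web'

// Placeholder bucket; the real bucket/prefix used by bucketClient.ts is not shown here.
const BUCKET = 'example-public-bucket'

// List object names under a prefix via the public GCS JSON API.
// (nextPageToken pagination is omitted for brevity.)
export async function listObjects(prefix: string): Promise<string[]> {
  const url =
    `https://storage.googleapis.com/storage/v1/b/${BUCKET}/o` +
    `?prefix=${encodeURIComponent(prefix)}`
  const res = await fetch(url)
  if (!res.ok) {
    throw new Error(`GCS list failed with HTTP ${res.status}`)
  }
  const body = (await res.json()) as { items?: { name: string }[] }
  return (body.items ?? []).map((item) => item.name)
}

// Open an anonymous download stream for one object and adapt the WHATWG
// stream returned by fetch into the Node Readable that csv-parse expects.
export async function openObjectStream(name: string): Promise<Readable> {
  const res = await fetch(`https://storage.googleapis.com/${BUCKET}/${encodeURIComponent(name)}`)
  if (!res.ok || !res.body) {
    throw new Error(`GCS download failed with HTTP ${res.status}`)
  }
  return Readable.fromWeb(res.body as unknown as WebReadableStream)
}
```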
services/apps/automatic_projects_discovery_worker/src/activities.ts (3 additions, 1 deletion)
```diff
@@ -1 +1,3 @@
-export * from './activities/activities'
+import { listDatasets, listSources, processDataset } from './activities/activities'
+
+export { listDatasets, listSources, processDataset }
```
services/apps/automatic_projects_discovery_worker/src/activities/activities.ts (119 additions, 2 deletions)
```diff
@@ -1,7 +1,124 @@
+import { parse } from 'csv-parse'
+
+import { bulkUpsertProjectCatalog } from '@crowd/data-access-layer'
+import { IDbProjectCatalogCreate } from '@crowd/data-access-layer/src/project-catalog/types'
+import { pgpQx } from '@crowd/data-access-layer/src/queryExecutor'
 import { getServiceLogger } from '@crowd/logging'
 
+import { svc } from '../main'
+import { getAvailableSourceNames, getSource } from '../sources/registry'
+import { IDatasetDescriptor } from '../sources/types'
+
 const log = getServiceLogger()
 
-export async function logDiscoveryRun(): Promise<void> {
-  log.info('Automatic projects discovery workflow executed successfully.')
+const BATCH_SIZE = 5000
+
+export async function listSources(): Promise<string[]> {
+  return getAvailableSourceNames()
+}
+
+export async function listDatasets(sourceName: string): Promise<IDatasetDescriptor[]> {
+  const source = getSource(sourceName)
+  const datasets = await source.listAvailableDatasets()
+
+  log.info({ sourceName, count: datasets.length, newest: datasets[0]?.id }, 'Datasets listed.')
+
+  return datasets
+}
+
+export async function processDataset(
+  sourceName: string,
+  dataset: IDatasetDescriptor,
+): Promise<void> {
+  const qx = pgpQx(svc.postgres.writer.connection())
+  const startTime = Date.now()
+
+  log.info({ sourceName, datasetId: dataset.id, url: dataset.url }, 'Processing dataset...')
+
+  const source = getSource(sourceName)
+  const stream = await source.fetchDatasetStream(dataset)
+
+  // For CSV sources: pipe through csv-parse to get Record<string, string> objects.
+  // For JSON sources: the stream already emits pre-parsed objects in object mode.
+  const records =
+    source.format === 'json'
+      ? stream
+      : stream.pipe(
+          parse({
+            columns: true,
+            skip_empty_lines: true,
+            trim: true,
+          }),
+        )
+
+  // pipe() does not forward source errors to the destination automatically, so we
+  // destroy records explicitly — this surfaces the error in the for-await loop and
+  // lets Temporal mark the activity as failed and retry it.
+  stream.on('error', (err: Error) => {
+    log.error({ datasetId: dataset.id, error: err.message }, 'Stream error.')
+    records.destroy(err)
+  })
+
+  if (source.format !== 'json') {
+    const csvRecords = records as ReturnType<typeof parse>
+    csvRecords.on('error', (err) => {
+      log.error({ datasetId: dataset.id, error: err.message }, 'CSV parser error.')
+    })
+  }
+
+  let batch: IDbProjectCatalogCreate[] = []
+  let totalProcessed = 0
+  let totalSkipped = 0
+  let batchNumber = 0
+  let totalRows = 0
+
+  for await (const rawRow of records) {
+    totalRows++
+
+    const parsed = source.parseRow(rawRow as Record<string, unknown>)
+    if (!parsed) {
+      totalSkipped++
+      continue
+    }
+
+    batch.push({
+      projectSlug: parsed.projectSlug,
+      repoName: parsed.repoName,
+      repoUrl: parsed.repoUrl,
+      ossfCriticalityScore: parsed.ossfCriticalityScore,
+      lfCriticalityScore: parsed.lfCriticalityScore,
+    })
+
+    if (batch.length >= BATCH_SIZE) {
+      batchNumber++
+
+      await bulkUpsertProjectCatalog(qx, batch)
+      totalProcessed += batch.length
+      batch = []
+
+      log.info({ totalProcessed, batchNumber, datasetId: dataset.id }, 'Batch upserted.')
+    }
+  }
+
+  // Flush remaining rows that didn't fill a complete batch
+  if (batch.length > 0) {
+    batchNumber++
+    await bulkUpsertProjectCatalog(qx, batch)
+    totalProcessed += batch.length
+  }
+
+  const elapsedSeconds = ((Date.now() - startTime) / 1000).toFixed(1)
+
+  log.info(
+    {
+      sourceName,
+      datasetId: dataset.id,
+      totalRows,
+      totalProcessed,
+      totalSkipped,
+      totalBatches: batchNumber,
+      elapsedSeconds,
+    },
+    'Dataset processing complete.',
+  )
 }
```
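The last commit (a8062f5, `fix: stream destroy`) addresses the Node.js streams subtlety called out in the comment above: `pipe()` does not forward a source's `'error'` event to its destination. A self-contained sketch of the failure mode and the fix, separate from this PR's code:

```ts
import { PassThrough, Readable } from 'node:stream'

async function main(): Promise<void> {
  const source = new Readable({ read() {} }) // stands in for the HTTP stream
  const parser = new PassThrough() // stands in for csv-parse

  source.pipe(parser)

  // Without this handler the error below would never reach the for-await
  // loop: pipe() only unpipes on a source error, leaving the destination
  // open and the consumer waiting forever.
  source.on('error', (err) => parser.destroy(err))

  // Simulate a mid-transfer network failure.
  setImmediate(() => source.destroy(new Error('network reset')))

  try {
    for await (const chunk of parser) {
      void chunk // consume rows
    }
  } catch (err) {
    // destroy(err) surfaces here; in the activity this rejection is what
    // lets Temporal mark the attempt as failed and schedule a retry.
    console.error('caught:', (err as Error).message)
  }
}

main()
```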