Docs: Unpacking and spike sorting quick steps

Updated on January 30, 2026

Half as a reminder to myself, here are the steps for unpacking and automated spike sorting.

0) Before you unpack, you will need to download our unpacking tools (https://github.com/FriedCNL/nwbPipeline) to a local computer that has access to both Hoffman and the long-term storage (LTS) server. That way you can read the data from the LTS and save the unpacked files on Hoffman for analysis. I also keep a copy on my personal Hoffman drive so I can run batch spike sorting on Hoffman (see steps 2 and 3).

1) Run either run_unpackNeuralynx.m or run_unpackBlackRock.m, which live in

/nwbPipeline-main/scripts/

You will need to fill in the patient ID, the exp ID, the expName, the path to the folder containing the raw NLX (or BRK) files (usually a folder like /Volumes/data/NLData/D591/EXP5_Goldmine/2026-01-28_15-31-33), and the path to the montage file (typically made by whoever ran Screening, which is almost always our first task). A sketch of the kind of values to fill in is below.
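To make that concrete, here is a minimal sketch of the values you'd fill in. The variable names below are illustrative assumptions, not necessarily the script's actual identifiers, so check the top of run_unpackNeuralynx.m (or run_unpackBlackRock.m) for the real ones; the montage path is hypothetical.

% Illustrative sketch only: these variable names are assumptions, not
% necessarily the script's actual identifiers.
patientId = 'D591';
expId = 5;
expName = 'Goldmine';
% Folder containing the raw NLX (or BRK) files:
dataPath = '/Volumes/data/NLData/D591/EXP5_Goldmine/2026-01-28_15-31-33';
% Montage file, typically created by whoever ran Screening (hypothetical path):
montageFile = '/Volumes/data/NLData/D591/D591_montage.xlsx';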

2) Log in to Hoffman and edit

~/autoSpikeSort/nwbPipeline/batch/runbatch_job.sh

where you need to fill in the expName (e.g. “GoldmineGems”; check the Hoffman …/data/PIPELINE_vc/ANALYSIS/ folder for the experiment name), the patientId, and the expIds.

3) Check that your automated sorting ran correctly. Sometimes Hoffman jobs die without creating a times file, so the check script uses the filenames to make sure a times file exists for each one. You'll need to have Hoffman mounted to the Fried data folder (see https://friedcnl.ucla.edu/docs/mounting-hoffman/), then run ~/autoSpikeSort/nwbPipeline/batch/checkTimesFiles.py.
If a few are missing, it may also be that too few spikes were detected for a times file to be saved. This is easy to check by looking at the file sizes in Finder: any *_spikes.mat file ≤50 KB likely has no spikes, so don't worry if those didn't produce times files.
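If you'd rather script the size check than eyeball it in Finder, here is a minimal MATLAB sketch of the same idea. The sortDir path is a placeholder you'll need to point at your own output folder, and the 50 KB cutoff is just the rough threshold mentioned above, not a hard rule.

% Flag *_spikes.mat files small enough that they probably contain no spikes.
% sortDir is a placeholder; point it at your sorting output folder.
sortDir = '/path/to/sorting/output';
files = dir(fullfile(sortDir, '*_spikes.mat'));
for k = 1:numel(files)
    if files(k).bytes <= 50 * 1024  % roughly the 50 KB cutoff above
        fprintf('%s (%d bytes): likely no spikes; a missing times file is OK\n', ...
            files(k).name, files(k).bytes);
    end
end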