Commit Graph

31 Commits

Author SHA1 Message Date
Scott Lamb ee98bf5236 no more Docker!
* use `termion` rather than `ncurses` to limit runtime deps
* cross-compile with `cross` instead of our own dockerfiles/scripts
* update instructions
* update release procedure and GitHub actions to match
* prep changelog for `v0.7.8`

Fixes #160
Closes #265
2023-10-18 21:55:47 -07:00
Scott Lamb ddda01e4fa preparing v0.7.0 2021-10-26 18:54:26 -07:00
Scott Lamb 30cea5cfcb several documentation improvements
*   prefix docker/nvr commands with sudo (fixes #142).
    I was just going to link to the docker documentation on setting
    up non-root access, but that's kind of a personal preference.
    I included a `<details>` about it instead and made all the commands
    work with sudo.

*   take better advantage of github markdown's code block syntax
    highlighting. Use "console" for shell session stuff, put the
    "nvr" wrapper script in its own block with "bash".

*   add some comments to nvr wrapper script where people need to
    make changes and/or will be confused.

*   add a `<details>` that talks about shutting down and restarting
    the session around `nvr config` (see #151). Still not user-friendly
    but at least it's better documented now.
2021-08-23 12:44:48 -07:00
Scott Lamb 4d4d78ba64 mass markdown reformatting
Add tables of contents (using the VS Code Markdown All-In-One extension)
and reformat lists to consistently use 4-space indents. No content
changes.
2021-04-01 12:32:31 -07:00
Scott Lamb ff1615a0b4 address database upgrade slowness (#107)
* give a rule of thumb for update time in the documentation
* log the SQLite3 version, which can affect performance
* do the vacuum in non-WAL mode, to correctly set the page size and to
  avoid very slow behavior on older SQLite3 versions. Larger page sizes
  are generally faster (including subsequent vacuum operations).
  This won't help much for the first vacuum after this change, but it
  will help afterward (see the sketch after this list).
* likewise, set the page size properly on "moonfire-nvr init".
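A minimal sketch of that vacuum sequence with `rusqlite`, assuming an
illustrative 16 KiB target page size (SQLite ignores `PRAGMA page_size`
while a database is in WAL mode, so WAL must be left first):

```rust
use rusqlite::Connection;

fn vacuum_with_page_size(conn: &Connection) -> rusqlite::Result<()> {
    // `PRAGMA journal_mode` returns the new mode as a row, so query it.
    let _mode: String = conn.query_row("PRAGMA journal_mode = DELETE", [], |r| r.get(0))?;

    // Takes effect only at the next VACUUM, and only outside WAL mode.
    conn.pragma_update(None, "page_size", 16384)?;

    // Rewrites the whole database with the new page size.
    conn.execute_batch("VACUUM")?;

    // Return to WAL mode for normal operation.
    let _mode: String = conn.query_row("PRAGMA journal_mode = WAL", [], |r| r.get(0))?;
    Ok(())
}
```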
2021-02-10 22:28:40 -08:00
Scott Lamb 31801e20c3 update docs to recommend new Docker setup
and remove the old scripts, to reduce the supported ways of doing
things.
2021-01-21 16:00:38 -08:00
Scott Lamb b34914c73e prepare to merge schema version 6 to master 2020-12-22 19:53:29 -08:00
Scott Lamb b078851ad7 remove half-baked object detection schema
I want to release schema version 6 now, with the fully developed pieces.
2020-12-22 19:50:51 -08:00
Scott Lamb cb97ccdfeb start splitting wall and media duration for #34
This splits the schema and playback path. The recording path still
adjusts the frame durations and always says the wall and media durations
are the same. I expect to change that in a following commit. I wouldn't
be surprised if that shakes out some bugs in this portion.
2020-08-04 21:44:01 -07:00
Scott Lamb f3ddbfe22a track cumulative duration and runs
This is useful for a combo scrub bar-based UI (#32) + live view UI (#59)
in a non-obvious way. When constructing an HTML Media Source Extensions
API SourceBuffer, the caller can specify a "mode" of either "segments"
or "sequence":

In "sequence" mode, playback assumes segments are added sequentially.
This is good enough for a live view-only UI (#59) but not for a scrub
bar UI in which you may want to seek backward to a segment you've never
seen before. You will then need to insert a segment out-of-sequence.
Imagine what happens when the user goes forward again until the end of
the segment inserted immediately before it. The user should see the
chronologically next segment or a pause for loading if it's unavailable.
The best approximation of this is to track the mapping of timestamps to
segments and insert a VTTCue with an enter/exit handler that seeks to
the right position. But seeking isn't instantaneous; the user will
likely briefly see the segment they were on before the seek completes.
That's janky. Additionally, the "canplaythrough" event will behave
strangely.

In "segments" mode, playback respects the timestamps we set:

* The obvious choice is to use wall clock timestamps. This is fine if
  they're known to be fixed and correct. They're not. The
  currently-recording segment may be "unanchored", meaning its start
  timestamp is not yet fixed. Older timestamps may overlap if the system
  clock was stepped between runs. The latter isn't /too/ bad from a user
  perspective, though it's confusing as a developer. We probably will
  only end up showing the more recent recording for a given
  timestamp anyway. But the former is quite annoying. It means we have
  to throw away part of the SourceBuffer that we may want to seek back to
  (causing UI pauses when that happens) or keep our own spare copy of it
  (memory bloat). I'd like to avoid the whole mess.

* Another approach is to use timestamps that are guaranteed to be in
  the correct order but that may have gaps. In particular, a timestamp
  of (recording_id * max_recording_duration) + time_within_recording.
  But again seeking isn't instantaneous. In my experiments, there's a
  visible pause between segments that drives me nuts.

* Finally, the approach that led me to this schema change. Use
  timestamps that place each segment after the one before, possibly with
  an intentional gap between runs (to force a wait where we have an
  actual gap). This should make the browser's natural playback behavior
  work properly: it never goes to an incorrect place, and it only waits
  when/if we want it to. We have to maintain a mapping between its
  timestamps and segment ids, but that's doable (see the sketch after
  this list).
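A hedged sketch of that third approach (types and names are illustrative,
not the actual schema or API): lay recordings out on a synthetic timeline,
inserting an artificial gap wherever a new run starts, and keep a mapping
back to recording ids for fetching segments:

```rust
struct Recording {
    id: i32,
    media_duration_90k: i64, // duration in 90 kHz units
    starts_new_run: bool,
}

const RUN_GAP_90K: i64 = 90_000; // 1 s of artificial gap between runs

/// Returns (timeline position, recording id) pairs in timeline order.
fn build_timeline(recordings: &[Recording]) -> Vec<(i64, i32)> {
    let mut out = Vec::with_capacity(recordings.len());
    let mut pos: i64 = 0;
    for r in recordings {
        if r.starts_new_run && pos > 0 {
            pos += RUN_GAP_90K; // force a wait where there's a real gap
        }
        out.push((pos, r.id));
        pos += r.media_duration_90k;
    }
    out
}

/// Maps a playback position back to the recording to fetch.
fn recording_at(timeline: &[(i64, i32)], pos: i64) -> Option<i32> {
    match timeline.binary_search_by_key(&pos, |&(t, _)| t) {
        Ok(i) => Some(timeline[i].1),
        Err(0) => None,                    // before the first recording
        Err(i) => Some(timeline[i - 1].1), // within (or just after) i-1
    }
}
```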

This commit is only the schema change; the new data aren't exposed in
the API yet, much less used by a UI.

Note that stream.next_recording_id became stream.cum_recordings. I made
a slight definition change in the process: recording ids for new streams
start at 0 rather than 1. Various tests changed accordingly.

The upgrade process makes a best effort to backfill these new fields,
but of course it doesn't know the total duration or number of runs of
previously deleted rows. That's good enough.
2020-06-09 16:17:32 -07:00
Scott Lamb c6fe1b66a1 add a preliminary schema for object detection (#30) 2020-03-25 18:10:37 -07:00
Scott Lamb 00991733f2 use Blake3 instead of SHA-1 or Blake2b
Benefits:

* Blake3 is faster. This is most noticeable for the hashing of the
  sample file data.
* we no longer need OpenSSL, which helps with shrinking the binary size
  (#70). sha1 basically forced OpenSSL usage; ring deliberately doesn't
  support this old algorithm, and the pure-Rust sha1 crate is painfully
  slow. OpenSSL might still be a better choice than ring/rustls for TLS
  but it's nice to have the option.

For the video sample entries, I decided we don't need to hash at all. I
think the id number is sufficiently stable, and it's okay---perhaps even
desirable---if an existing init segment changes for fixes like e5b83c2.
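For illustration, hashing sample file data incrementally with the `blake3`
crate might look like this (a sketch, not the project's actual I/O path):

```rust
/// Hashes everything readable from `f` in 64 KiB chunks.
fn hash_sample_file(mut f: impl std::io::Read) -> std::io::Result<blake3::Hash> {
    let mut hasher = blake3::Hasher::new();
    let mut buf = [0u8; 65536];
    loop {
        let n = f.read(&mut buf)?;
        if n == 0 {
            break; // EOF
        }
        hasher.update(&buf[..n]);
    }
    Ok(hasher.finalize())
}
```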
2020-03-20 21:46:53 -07:00
Scott Lamb e5b83c21e1 schema version 6 with pixel aspect ratio
This makes anamorphic sub streams display correctly, even ones from old
Hikvision cameras that don't properly set the aspect ratio at the H.264
layer.
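As a hedged illustration of what storing the pixel aspect ratio enables
(40:33 is the standard BT.601-derived value for this geometry, not
necessarily what any particular camera reports): a 704x480 stream with a
40:33 pixel aspect ratio should be displayed roughly 853 pixels wide,
i.e. 16:9 rather than a squished 22:15:

```rust
/// Scales coded width by the pixel aspect ratio, rounding to nearest.
fn display_width(coded_width: u32, par_num: u32, par_den: u32) -> u32 {
    (coded_width * par_num + par_den / 2) / par_den
}

fn main() {
    assert_eq!(display_width(704, 40, 33), 853); // 853x480 ≈ 16:9
}
```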
2020-03-19 21:40:59 -07:00
Scott Lamb 154d0c30c0 note caveat on signals schema 2019-12-28 08:00:38 -06:00
Scott Lamb d61b5e1bdd Use fixed-size directory meta files
Add a new schema version 5; now 4 means the directory meta may or may
not be upgraded.

Fixes #65: now it's possible to open the directory even if it lies on a
completely full disk.
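A sketch of the fixed-size idea (`META_LEN` is illustrative; the real
length and serialization live in the db code): pad the serialized meta to
a constant size and overwrite the file in place, so a rewrite never needs
to allocate new blocks on a full disk:

```rust
use std::io::{Seek, SeekFrom, Write};
use std::path::Path;

const META_LEN: usize = 512; // hypothetical fixed size

fn write_meta(path: &Path, serialized: &[u8]) -> std::io::Result<()> {
    assert!(serialized.len() <= META_LEN, "meta must fit the fixed size");
    let mut buf = vec![0u8; META_LEN];
    buf[..serialized.len()].copy_from_slice(serialized);
    let mut f = std::fs::OpenOptions::new().write(true).create(true).open(path)?;
    f.seek(SeekFrom::Start(0))?;
    f.write_all(&buf)?; // same length every time: a pure in-place overwrite
    f.sync_all() // make it durable before the db relies on it
}
```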
2019-07-04 23:30:37 -05:00
Scott Lamb eaf6608597 tweak and document version 3->4 upgrade 2019-06-20 16:03:59 -07:00
Scott Lamb 1c9f2a4d83 initial schema for user authentication (#26)
This is only the database schema, which I'm adding now in the hopes of
freezing schema version 3. There's no way yet to create users, much less
actually authenticate.
2018-03-21 23:57:45 -07:00
Scott Lamb dfee66c84b support additional recording_integrity timestamps
These are not actually populated by the code yet. I'm trying to get the
v3 schema frozen as soon as possible; actually using the fields can come
later.

Add some explanation of their value in time.md, along with some general
musing on leap seconds, and a correction on the frequency error of my cameras.
2018-03-21 22:32:41 -07:00
Scott Lamb b037c9bdd7 knob to reduce db commits (SSD write cycles)
This improves the practicality of having many streams (including the doubling
of streams by having main + sub streams for each camera). With these tuned
properly, extra streams don't cause any extra write cycles in normal or error
cases. Consider the worst case in which each RTSP session immediately sends a
single frame and then fails. Moonfire retries every second, so this would
formerly cause one commit per second per stream. (flush_if_sec=0 preserves
this behavior.) Now the commits can be arbitrarily infrequent by setting
higher values of flush_if_sec.

WARNING: this isn't production-ready! I hacked up dir.rs to make tests pass
and "moonfire-nvr run" work in the best-case scenario, but it doesn't handle
errors gracefully. I've been debating what to do when writing a recording
fails. I considered "abandoning" the recording, then either reusing or skipping
its id (in the latter case, marking the file as garbage if it can't be
unlinked immediately). I now think there's no point in abandoning a recording.
If I can't write to that file, there's no reason to believe another will work
better. It's better to retry that recording forever, and perhaps put the whole
directory into an error state that stops recording until those writes go
through. I'm planning to redesign dir.rs to make this happen.
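A sketch of the knob's decision logic with hypothetical names (the real
types live in db.rs/dir.rs): buffer completed recordings in memory and
commit only once the oldest uncommitted one has waited `flush_if_sec`
seconds, so `flush_if_sec = 0` degenerates to the old
commit-per-recording behavior:

```rust
use std::time::{Duration, Instant};

struct StreamState {
    flush_if_sec: u64,
    /// When the oldest uncommitted recording finished, if any.
    oldest_uncommitted: Option<Instant>,
}

fn should_flush(s: &StreamState, now: Instant) -> bool {
    match s.oldest_uncommitted {
        None => false, // nothing buffered; no commit needed
        Some(t) => now.duration_since(t) >= Duration::from_secs(s.flush_if_sec),
    }
}
```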
2018-02-22 16:35:34 -08:00
Scott Lamb 253f3de399 reorganize the sample file directory
The filenames now represent composite ids (stream id + recording id) rather
than a separate uuid system with its own reservation step, for a few benefits:

  * This provides more information when there are inconsistencies.

  * This avoids the need for managing the reservations during recording. I
    expect this to simplify delaying flushing of newly written sample files.
    Now the directory has to be scanned at startup for files that never got
    written to the database, but that's acceptably fast even with millions of
    files.

  * Less information to keep in memory and in the recording_playback table.

I'd considered using one directory per stream, which might help if the
filesystem has trouble coping with huge directories. But that would mean each
dir has to be fsync()ed separately (more latency and/or more multithreading).
So I'll stick with this until I see concrete evidence of a problem that it
would solve.

Test coverage of the error conditions is poor. I plan to do some restructuring
of the db/dir code, hopefully making steps toward testability along the way.
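A sketch of the composite id layout, assuming (illustratively) the stream
id in the high 32 bits and the per-stream recording id in the low 32,
with the hex form serving as the sample filename:

```rust
#[derive(Copy, Clone, PartialEq, Eq)]
struct CompositeId(i64);

impl CompositeId {
    fn new(stream_id: i32, recording_id: i32) -> Self {
        CompositeId((i64::from(stream_id) << 32) | i64::from(recording_id))
    }

    /// The sample file's name, e.g. stream 1 / recording 2 ->
    /// "0000000100000002".
    fn filename(self) -> String {
        format!("{:016x}", self.0)
    }
}
```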
2018-02-20 10:11:10 -08:00
Scott Lamb e7f5733f29 new database/sample file dir interlock scheme
The idea is to avoid the problems described in src/schema.proto; those
possibilities have bothered me for a while. A bonus is that (in a future
commit) it can replace the sample file uuid scheme in favor of using
<camera_uuid>-<stream_type>/<recording_id> for several advantages:

  * on data integrity problems (specifically, extra sample files), more
    information to use to understand what happened.
  * no more reserving sample files prior to using them. This avoids some extra
    database transactions on startup (now there are two extra in total
    rather than one extra per stream). It also simplifies an upcoming change I
    want to make in which some streams are not flushed immediately, reducing
    the write load significantly (maybe one per minute total rather than one
    per stream per minute).
  * get rid of eight bytes per playback cache entry in RAM (and nine bytes
    per recording_playback row on flash).

The implementation is still pretty rough in places:

  * Lack of tests.
  * Poor code organization. In particular, SampleFileDirectory::write_meta
    shouldn't be exposed beyond db. I'm thinking about moving db.rs and
    SampleFileDirectory to a new crate, moonfire_nvr_db. This would improve
    compile times as well.
  * No tooling for renaming a sample file directory.
  * Config subcommand still panics in conditions that can be reasonably
    expected to happen.
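The interlock might be pictured like this hedged sketch (field names are
guesses at the concept, not the real schema.proto messages): the
directory meta remembers which database it belongs to and the last open
it completed, and both sides must agree before recording starts:

```rust
/// Hypothetical directory meta; not the real schema.proto message.
struct DirMeta {
    db_uuid: [u8; 16],
    dir_uuid: [u8; 16],
    /// The last open this dir completed with the database, if any.
    last_complete_open_id: Option<u32>,
}

/// Refuses a directory that belongs to another database or that
/// disagrees with the database about the most recent open.
fn check_interlock(
    meta: &DirMeta,
    db_uuid: &[u8; 16],
    expected_dir_uuid: &[u8; 16],
    db_open_id: Option<u32>,
) -> Result<(), String> {
    if &meta.db_uuid != db_uuid || &meta.dir_uuid != expected_dir_uuid {
        return Err("sample file dir belongs to another database".into());
    }
    if meta.last_complete_open_id != db_open_id {
        return Err("dir and database disagree about the last open".into());
    }
    Ok(())
}
```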
2018-02-14 23:35:52 -08:00
Scott Lamb 89b6bccaa3 support multiple sample file directories
This is still pretty basic support. There's no config UI support for
renaming/moving the sample file directories after they are created, and no
error checking that the files are still in the expected place. I can imagine
sysadmins getting into trouble trying to change things. I hope to address at
least some of that in a follow-up change to introduce a versioning/locking
scheme that ensures databases and sample file dirs match in some way.

A bonus change that kinda got pulled along for the ride: a dialog pops up in
the config UI while a stream is being tested. The experience was pretty bad
before; there was no indication the button worked at all until it was done,
sometimes many seconds later.
2018-02-11 23:04:02 -08:00
Scott Lamb 6f309e432f store rfc6381_codec in the database
This avoids having codec-specific logic to synthesize it in db.rs. It's not
too much of a problem now with only H.264 support, but it'd be a pain when
supporting H.265 and other codecs.
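For example, the RFC 6381 string for H.264 is derived from three
profile/level bytes of the AVCDecoderConfigurationRecord; exactly the
kind of codec-specific synthesis this commit moves out of db.rs (the
function name here is illustrative):

```rust
/// Builds an RFC 6381 "avc1" codec string, e.g. "avc1.640028" for
/// High profile (0x64), no constraint flags, level 4.0 (0x28).
fn avc1_codec_string(profile_idc: u8, constraint_flags: u8, level_idc: u8) -> String {
    format!("avc1.{:02X}{:02X}{:02X}", profile_idc, constraint_flags, level_idc)
}
```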
2018-02-05 11:57:59 -08:00
Scott Lamb cc6579b211 upgrader for v1->v2 2018-02-03 22:19:02 -08:00
Scott Lamb d192d98129 more fixes to the upgrade guide 2018-01-30 21:14:53 -08:00
Scott Lamb 9a8ed69047 fix more formatting in upgrade guide 2018-01-30 17:01:44 -08:00
Scott Lamb 7566b6a38b fix several formatting problems in upgrade guide 2018-01-30 17:01:03 -08:00
Scott Lamb bfc0e2abe8 use my own logging package
This supports formats that I find more useful: one that mimics the Google
glog package, and one that is similar but adapted for the systemd journal.
2017-03-26 00:01:48 -07:00
Scott Lamb e1cb5f4204 small improvements to schema upgrade instructions 2017-01-01 22:58:27 -08:00
Scott Lamb eee887b9a6 schema version 1
The advantages of the new schema are:

* overlapping recordings can be unambiguously described and viewed.
  This is a significant problem right now; the clock on my cameras appears to
  run faster than the (NTP-synchronized) clock on my NVR. Thus, if an
  RTSP session drops and is quickly reconnected, there's likely to be
  overlap.

* less I/O is required to view mp4s when there are multiple cameras.
  This is a pretty dramatic difference in the number of database read
  syscalls with pragma page_size = 1024 (605 -> 39 in one test),
  although I'm not sure how much of that maps to actual I/O wait time.
  The improvement is probably as dramatic as it is because of overflow
  page chaining.
  But even with larger page sizes, there's an improvement. It helps to
  stop interleaving the video_index fields from different cameras.

There are changes to the JSON API to take advantage of this, described
in design/api.md.

There's an upgrade procedure, described in guide/schema.md.
2016-12-20 22:08:18 -08:00
Scott Lamb 86dd36d7a5 version the sqlite3 database schema
See guide/schema.md for instructions on upgrading past this commit.
2016-12-20 15:44:04 -08:00