Tags: CRUD

Data Storage

Input as JSON -

  • start & end the doc with curlies {}
  • keys & values - keys need quotes
  • separate keys & vals with colons - {"thisKey": "thisVal"}
  • separate key-value pairs with commas - {"keyOne": "valOne", "keyTwo": "valTwo"}
  • keys can nest other key/value pairs, "sub-documents"
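A tiny made-up doc that follows all of those rules (the keys here are just for illustration):

{
  "keyOne": "valOne",
  "keyTwo": "valTwo",
  "subDoc": { "nestedKey": "nestedVal" }
}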
PROs of JSON:
  • User Friendly - easy to understand
  • Readable - easy to read
  • Familiar - used on the frontend and in json api traffic

CONs of JSON:
  • Text-based - slow parsing
  • Space-consuming & inefficient at storing info
  • Supports only a few data types

MongoDB addressed these with BSON: Binary JSON

  • binary representation of json
  • fast
  • flexible
  • less space
  • highly performant
  • handles dates & binary data
JSON vs BSON:
  • Encoding: JSON is UTF-8; BSON is binary
  • Data Support: JSON has Strings, Booleans, Numbers, Arrays; BSON adds richer Numbers (Integers, Longs, Floats, more), plus Dates and Raw Binary
  • Readability: JSON is readable by humans + machines; BSON by machines only

MongoDB stores data as bson and also sends it over the network as bson - interesting!
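For example, the mongo shell can store values that plain json can't express - this is just an illustrative doc, not anything from the course data:

db.coll.insertOne({
  createdAt: new Date(),               // a real Date type, not a string
  raw: BinData(0, "SGVsbG8="),         // raw binary data (base64 for "Hello")
  big: NumberLong("9007199254740993")  // a 64-bit integer
})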

Data Importing & Exporting

Data can be exported & imported in json and bson.

  • JSON: import with mongoimport, export with mongoexport
  • BSON: import with mongorestore, export with mongodump
/*

  Export The Data

*/
mongodump --uri "mongodb+srv://<username>:<pw>@<cluster-string>.mongodb.net/db_name_here"

mongoexport --uri="mongodb+srv://<username>:<pw>@<cluster-string>.mongodb.net/db_name_here" --collection=sales --out=sales.json



/*

  Import The Data

*/
mongorestore --uri "mongodb+srv://<username>:<pw>@<cluster-string>.mongodb.net/db_name_here" --drop dump

mongoimport --uri="mongodb+srv://<username>:<pw>@<cluster-string>.mongodb.net/db_name_here" --drop sales.json
  • the uri (uniform resource identifier) is an srv connection string used to connect to the mongo instance
  • the --drop flag on both import commands prevents duplicate-key errors by dropping the existing collection(s) before importing
  • both import commands can also take the --collection=<collection_name> flag, as shown below
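A sketch of that flag in action, reusing the sales.json file and URI placeholders from above:

mongoimport --uri="mongodb+srv://<username>:<pw>@<cluster-string>.mongodb.net/db_name_here" --collection=sales --drop sales.json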

On Reading

Interact with data through the mongo shell, a js interpreter that allows working with the db without a gui.

  • find is the first thing to use to show data in a mongo db
    • db.collectionName.find()
  • pretty is a command that can be tacked on to the end to "clean up" the output of the find command
    • db.collectionName.find().pretty()
  • find returns a cursor - the shell prints the first 20 results by default, and typing the it command then pressing RETURN iterates through the cursor to show the next batch
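A quick sketch in the shell, assuming the sales collection imported earlier:

/*
  Reading with find
*/
db.sales.find()          // returns a cursor; the shell prints the first 20 docs
db.sales.find().pretty() // same query with formatted output
it                       // shows the next batch of results from the cursor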

On Creating

This can be done through atlas

  • find the Insert Document button, currently located in the "top right" of the collection data explorer in the atlas gui
  • the _id field has to exist on new docs
    • it also must be unique: all other fields in a doc can be EXACTLY THE SAME, but the _id field must be unique
    • the field is auto-generated by mongo when not provided - epic - instead of using the auto-gen, an app-specific unique id could be nice, as the field is also auto-indexed!!
  • mongoimport can be used to import MANY docs
    • when the _id fields in the new data duplicate existing data, add the --drop option to drop the whole collection before importing

Using the mongo shell to insert docs:
/*
  using the `insert` command
*/
db.coll.insert({ docOne: "val" })

// on success
WriteResult({ nInserted: 1 })

/*
  Inserting Many
*/

db.coll.insert([{ docOne: "val" }, { docTwo: "val" }, { docThree: "val" }])
// returns something like...
BulkWriteResult({
  writeErrors: [],
  writeConcernErrors: [],
  nInserted: 3,
  nUpserted: 0,
  nMatched: 0,
  nModified: 0,
  nRemoved: 0,
  upserted: [],
})

/*
  Forcing a duplicate key err on inserting many
*/
db.coll.insert([
  {
    _id: 123,
    water: "melonOne",
  },
  {
    _id: 123,
    water: "melonTwo",
  },
])

// returns...
BulkWriteResult({
  writeErrors: [
    {
      index: 1,
      code: 11000,
      errmsg: "E11000 duplicate key error collection: ...etc",
      op: {
        _id: 123,
        water: "melonTwo",
      },
    },
  ],
  writeConcernErrors: [],
  nInserted: 1,
  nUpserted: 0,
  nMatched: 0,
  nModified: 0,
  nRemoved: 0,
  upserted: [],
})

/*
  NOTICE 
  - the problem doc noted
  - inserted in order they are listed
  - insert STOPS at a failed doc
*/

// with the ORDERED option, another approach can happen:
// when ordered is set to false,
// the insert CONTINUES past a failed doc
db.asd.insert(
  [
    { _id: 1, test: "oneTwo" },
    { _id: 1, test: "twoThree" },
    { _id: 5, test: "worksWell" },
  ],
  { ordered: false }
)
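Since ordered is false here, the duplicate _id: 1 still errors but _id: 5 gets inserted anyway, so the result should look roughly like this:

// returns something like...
BulkWriteResult({
  writeErrors: [ /* one E11000 duplicate key error for index 1 */ ],
  nInserted: 2,
  nUpserted: 0,
  nMatched: 0,
  nModified: 0,
  nRemoved: 0,
  upserted: [],
})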

On Updating

  • updateOne
    • updates a single doc
  • updateMany
    • updates MANY docs that match a selection query
  • NOTES
    • when updating a field that does not exist, the field gets implicitly added to the doc: operators like $push, $set, and $inc will auto-create fields (see the sketch after the updateMany example below)
// db.coll.updateOne({selection_criteria}, {update_val})

db.sdf.updateMany(
  {
    water: {
      $regex: "^m",
    },
  },
  {
    $set: {
      sink: "kitchen",
    },
  }
)
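A sketch of that implicit field creation - the views and tags fields here are made up, on the same sdf collection:

db.sdf.updateOne(
  { water: "melonOne" },      // selection criteria
  {
    $inc: { views: 1 },       // creates views and sets it to 1 if it doesn't exist yet
    $push: { tags: "fruit" }  // creates the tags array if it doesn't exist yet
  }
)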

On Deleting

Docs can be deleted with

  • deleteOne
  • deleteMany

The only time deleteOne is a good approach is when deleting by the _id field: this is the ONLY field we can be 100% sure is unique across docs.

Collections can be dropped with drop() like db.coll.drop()

Deleting all docs in a collection can be done with db.coll.deleteMany({})
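A sketch pulling those together, reusing the coll collection and fields from the insert examples above:

/*
  Deleting docs
*/
db.coll.deleteOne({ _id: 123 })           // remove a single doc by its unique _id
db.coll.deleteMany({ water: "melonOne" }) // remove every doc matching the filter
db.coll.deleteMany({})                    // remove ALL docs but keep the collection
db.coll.drop()                            // drop the whole collection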

Array Examples

For some more details on CRUD and arrays, see this other post