Use a human-readable format with attribute-value pairs, such as XML or JSON.

The parser can simply ignore any attributes it doesn't understand and fall back to defaults for any it doesn't find, which makes backward and forward compatibility quite easy. Also, because the format is human-readable, you can edit it with any text editor.
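For instance, here is a minimal sketch of such a tolerant parser in Python (the attribute names are made up for illustration):

```python
import json

# Known attributes and their defaults. "name", "health" and "speed"
# are hypothetical examples, not part of any real format.
DEFAULTS = {"name": "unnamed", "health": 100, "speed": 1.0}

def load_entity(text):
    data = json.loads(text)
    # Keep only the keys we know about. Anything a newer version of the
    # format added is silently skipped (forward compatibility), and
    # anything an older file lacks gets a default (backward compatibility).
    return {key: data.get(key, default) for key, default in DEFAULTS.items()}

# A file written by a newer version with an extra "armor" attribute
# still loads fine, and the missing "speed" falls back to its default:
print(load_entity('{"name": "goblin", "health": 30, "armor": 5}'))
# {'name': 'goblin', 'health': 30, 'speed': 1.0}
```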

Another advantage of an established format like XML or JSON is that practically every scripting language has a library for it, so when you do need to write a script that edits a large number of files, the job becomes much easier.
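As a sketch of what such a batch edit might look like in Python (the directory name and the "hp"/"health" attributes are hypothetical):

```python
import json
from pathlib import Path

# Rename the "hp" attribute to "health" in every JSON file in a folder.
for path in Path("data").glob("*.json"):
    data = json.loads(path.read_text())
    if "hp" in data:
        data["health"] = data.pop("hp")
        path.write_text(json.dumps(data, indent=2))
```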

The drawback of most of these formats is that they are quite verbose: the resulting files are much larger than they would need to be with an optimized binary format. Nowadays, file size doesn't matter much in most situations, and in those where it does, the file size can often be reduced significantly by compressing the file with a stock algorithm like zip.
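To illustrate, here is a small Python sketch using gzip from the standard library as a stand-in for "a stock algorithm". The repetitive structure of verbose JSON is exactly what general-purpose compressors exploit:

```python
import gzip
import json

# Generate some repetitive, verbose JSON and compress it.
data = [{"name": f"item{i}", "value": i} for i in range(1000)]
raw = json.dumps(data).encode("utf-8")
compressed = gzip.compress(raw)
print(len(raw), "->", len(compressed))  # typically a large reduction
```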

Text-based formats often don't allow random access unless the whole document is read from disk and parsed. In practice this matters less than it sounds, because hard drives are fastest at sequential reads: seeking back and forth between different parts of the same file is often slower than reading the whole file in one go, even if that means reading more data than you need.
