ext4 format
Old disk
Current disk with ext3 (all units are powers of 1024, not 1000):
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       273G  251G  7.9G  97% /
debugfs stats says:
Filesystem features: has_journal ext_attr filetype needs_recovery sparse_super large_file
Inode count:              36257792
Block count:              72505353
Reserved block count:     3625267
Free blocks:              5554423
Free inodes:              28537176
Inodes per group:         16384
Inode size:               128
I.e. there's an inode slot for about every 7.9 kB of the reported 273 GB, so I suppose mkfs used the old default ratio of one inode per 8 kB. The inode tables themselves occupy 4.3 GB (36257792 inodes at 128 bytes each). That leaves about 246.7 GB of file+dir data spread over the ~7.7 million inodes actually in use, an average of one used inode per 34 kB of stored data.
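Those ratios can be sanity-checked directly from the debugfs figures. A minimal sketch, assuming the standard 4096-byte block size (debugfs does report it, but it isn't shown above):

```shell
#!/bin/sh
# Figures from the debugfs stats above; a 4096-byte block size is assumed.
BLOCK_SIZE=4096
BLOCK_COUNT=72505353
INODE_COUNT=36257792
INODE_SIZE=128
FREE_INODES=28537176

# Disk bytes per inode slot: the mkfs bytes-per-inode (-i) ratio
echo $(( BLOCK_COUNT * BLOCK_SIZE / INODE_COUNT ))   # 8190, i.e. ~8 kB

# Space consumed by the inode tables themselves, in MiB
echo $(( INODE_COUNT * INODE_SIZE / 1048576 ))       # 4426 MiB, i.e. ~4.3 GB

# Inodes actually in use
echo $(( INODE_COUNT - FREE_INODES ))                # 7720616
```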
New disk
When I run mkfs.ext4 on a new 2 TB disk, the default filesystem comes out like this:
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb1       1.9T   28G  1.7T   2% /mnt

Filesystem features: has_journal ext_attr resize_inode dir_index filetype extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Inode count:              121577472
Block count:              486281216
Reserved block count:     24314060
Free blocks:              478600693
Free inodes:              121577461
Inodes per group:         8192
Inode size:               256
The inodes are twice as big, presumably for the subsecond timestamp precision (and faster in-inode extended attributes, which I don't use), and mkfs has apparently picked one inode per 16 kB. That works out to about 28 GB of overhead just for inode tables, which seems steep. It has also set aside a 92 GB reservation (the default 5% of blocks) for the superuser.
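The same arithmetic on the default ext4 layout (again assuming 4096-byte blocks) confirms the ratio and both overhead figures:

```shell
#!/bin/sh
# Figures from the default mkfs.ext4 run above; 4096-byte blocks assumed.
BLOCK_SIZE=4096
BLOCK_COUNT=486281216
INODE_COUNT=121577472
INODE_SIZE=256
RESERVED_BLOCKS=24314060

# Disk bytes per inode slot: the default ~16 kB ratio
echo $(( BLOCK_COUNT * BLOCK_SIZE / INODE_COUNT ))   # 16383, i.e. ~16 kB

# Inode table overhead in MiB
echo $(( INODE_COUNT * INODE_SIZE / 1048576 ))       # 29682 MiB, i.e. ~29 GB

# Superuser reservation in MiB (the default -m 5, i.e. 5% of blocks)
echo $(( RESERVED_BLOCKS * BLOCK_SIZE / 1048576 ))   # 94976 MiB, i.e. ~92.7 GB
```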
With these flags I get a more sensible layout: one inode per 32 kB (-i 32768) and a 1% superuser reservation (-m 1):

mkfs.ext4 -v -i 32768 -m 1 /dev/sdb1
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb1       1.9T   14G  1.8T   1% /mnt

Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Inode count:              60788736
Block count:              486281216
Reserved block count:     4862812
Free blocks:              482399989
Free inodes:              60788725
Inodes per group:         4096
Inode size:               256
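The flags predict these counts directly. A quick check (mkfs rounds the inode count up to fill out the block groups, so the -i prediction comes in slightly under the actual figure):

```shell
#!/bin/sh
BLOCK_SIZE=4096
BLOCK_COUNT=486281216

# -i 32768: one inode slot per 32 kB of disk
echo $(( BLOCK_COUNT * BLOCK_SIZE / 32768 ))   # 60785152 predicted, vs 60788736 actual

# -m 1: reserve 1% of blocks for the superuser
echo $(( BLOCK_COUNT / 100 ))                  # 4862812, matching exactly
```

Note that the reserved percentage can still be changed on an existing filesystem with tune2fs -m, but the bytes-per-inode ratio is fixed at mkfs time, so -i is the one worth getting right up front.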