(=°ω°)丿
Is there any way in Python 3 to read and modify files such as the vanilla End ship's NBT, or the command storage files?

阴阳师元素祭祀
Yes, of course there is.
How could a programming language possibly be unable to read and write files?
Just read the bytes and parse them yourself, following the format described here:
https://wiki.biligame.com/mc/NBT%E6%A0%BC%E5%BC%8F
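
For example, here is a minimal Python 3 sketch of that first step, just opening such a file and reading the root tag header. The path is only an example, and I'm assuming the file is gzip-compressed, which is the usual case for Minecraft .dat files (command storage normally lives under <world>/data/):

import gzip

# Example path only; adjust to your own save.
path = "command_storage_minecraft.dat"

with open(path, "rb") as f:
    raw = f.read()

# Most Minecraft .dat files are gzip-compressed NBT; fall back to the raw bytes if not.
try:
    data = gzip.decompress(raw)
except OSError:
    data = raw

# Per the NBT format: 1 byte tag id, 2 bytes big-endian name length, then the UTF-8 name.
tag_id = data[0]
name_len = int.from_bytes(data[1:3], "big")
print(tag_id, repr(data[3:3 + name_len].decode("utf-8")))  # a compound root prints 10 and its (often empty) name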


For the general idea of the code, you can refer to
https://www.mcbbs.net/thread-1014198-272076-1.html
To help out, I've folded below a piece of NBT code that only parses int and {} (compound) tags.


Code:

import java.io.DataInputStream;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class NbtDump {

    // Indentation helper: four spaces per nesting level.
    private static String getYYS(int depth) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < depth; ++i) {
            sb.append("    ");
        }
        return sb.toString();
    }

    // Reads named tags until TAG_End (0); only TAG_Int (0x3) and TAG_Compound (0xA) are handled.
    private static void read(DataInputStream in, int depth) throws IOException {
        boolean next = true;
        while (next) {
            byte tag = in.readByte();
            next = tag != 0;
            if (next) {
                short nameLength = in.readShort();
                if (nameLength != 0) {
                    byte[] name = new byte[nameLength];
                    if (in.read(name) != name.length) {
                        throw new IOException("ljyys for name error");
                    }
                    String tagName = new String(name);
                    System.out.println(getYYS(depth) + tagName + " {");
                } else {
                    System.out.println(getYYS(depth) + "{");
                }
                switch (tag) {
                    case 0x3: {
                        System.out.println(getYYS(depth + 1) + in.readInt());
                        System.out.println(getYYS(depth) + "}");
                        break;
                    }
                    case 0xA: {
                        read(in, depth + 1);
                        next = in.available() > 0;
                        break;
                    }
                    default: {
                        System.out.println("data left: " + in.available());
                        throw new IOException("ljyys for tag:" + tag);
                    }
                }
            }
        }
        System.out.println(getYYS(depth) + "}");
    }

    public static void main(String[] args) throws Throwable {
        DataInputStream in = new DataInputStream(Files.newInputStream(Paths.get("command_storage_minecraft")));
        read(in, 0);
    }
}



I believe the equivalent simple Python code would be even shorter.


py:
open(...)
read(...)
I'm sure the OP already knows Python; code this basic shouldn't need teaching.
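
For instance, a rough Python 3 sketch of the same idea as the Java snippet above; like the Java version it only understands TAG_Int (0x3) and TAG_Compound (0xA), and the file name is just a placeholder (gzip-decompress first if needed, as in the earlier snippet):

import struct

def dump(data, pos=0, depth=0):
    """Walk a decompressed NBT byte string and print int/compound tags."""
    indent = "    " * depth
    while pos < len(data):
        tag = data[pos]; pos += 1
        if tag == 0:                                  # TAG_End closes the current compound
            print(indent + "}")
            return pos
        name_len = struct.unpack_from(">h", data, pos)[0]; pos += 2
        name = data[pos:pos + name_len].decode("utf-8"); pos += name_len
        print(indent + ((name + " {") if name else "{"))
        if tag == 0x03:                               # TAG_Int: 4-byte big-endian value
            value = struct.unpack_from(">i", data, pos)[0]; pos += 4
            print(indent + "    " + str(value))
            print(indent + "}")
        elif tag == 0x0A:                             # TAG_Compound: recurse into its children
            pos = dump(data, pos, depth + 1)
        else:
            raise ValueError("unhandled tag id: %d" % tag)
    return pos

with open("command_storage_minecraft", "rb") as f:    # placeholder file name, same as above
    dump(f.read())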



(=°ω°)丿
阴阳师元素祭祀 posted on 2020-4-11 21:52
Yes, of course there is.
How could a programming language possibly be unable to read and write files?
https://wiki.biligame.com/mc/NBT%E6%A0%BC%E5%BC%8F

What I asked about was Python, and what you gave me is Java (

阴阳师元素祭祀
(=°ω°)丿 posted on 2020-4-11 21:53
What I asked about was Python, and what you gave me is Java (

Judging by the situation in the group next door, my guess is
that what you need is
@箱子's good stuff.


[Repost + Translation][Learn Programming from Scratch] Python 3 IV: Exceptions & Files
https://www.mcbbs.net/thread-990257-1-1.html
(Source: Minecraft(我的世界)中文论坛)


Surely you're not asking for the complete code... I'm off, I'm off.

阴阳师元素祭祀

Since downloading it doesn't seem to be an option,
...please forgive me for posting the files like this:
https://github.com/twoolie/NBT
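
For what it's worth, here is a minimal usage sketch of this library; the path and the tag names ("data", "contents", "some_number") are only examples, not taken from a real save:

from nbt.nbt import NBTFile, TAG_Int

# NBTFile expects a GZip-compressed file when given a filename (Minecraft .dat files usually are).
nbtfile = NBTFile(filename="command_storage_minecraft.dat")   # example path

print(nbtfile.pretty_tree())                # dump the whole tag tree

# Read and modify tags; the names here are made up for illustration.
contents = nbtfile["data"]["contents"]
contents["some_number"] = TAG_Int(42)       # add or replace an int tag

nbtfile.write_file("command_storage_minecraft.dat")   # write it back, GZip-compressed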


Here is the important license:
Copyright (c) 2010-2013 Thomas Woolford and contributors


Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:


The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.


THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

nbt/__init__.py


Code:


  1. __all__ = ["nbt", "world", "region", "chunk"]
  2. from . import *

  3. # Documentation only automatically includes functions specified in __all__.
  4. # If you add more functions, please manually include them in doc/index.rst.

  5. VERSION = (1, 5, 0)
  6. """NBT version as tuple. Note that the major and minor revision number are
  7. always present, but the patch identifier (the 3rd number) is only used in 1.4."""

  8. def _get_version():
  9.     """Return the NBT version as string."""
  10.     return ".".join([str(v) for v in VERSION])



nbt/chunk.py


Code:


  1. """
  2. Handles a single chunk of data (16x16x128 blocks) from a Minecraft save.
  3. For more information about the chunck format:
  4. https://minecraft.gamepedia.com/Chunk_format
  5. """

  6. from io import BytesIO
  7. from struct import pack
  8. import array
  9. import nbt


  10. # Legacy numeric block identifiers
  11. # mapped to alpha identifiers in best effort
  12. # See https://minecraft.gamepedia.com/Java_Edition_data_values/Pre-flattening
  13. # TODO: move this map into a separate file

  14. block_ids = {
  15.    0: 'air',
  16.    1: 'stone',
  17.    2: 'grass_block',
  18.    3: 'dirt',
  19.    4: 'cobblestone',
  20.    5: 'oak_planks',
  21.    6: 'sapling',
  22.    7: 'bedrock',
  23.    8: 'flowing_water',
  24.    9: 'water',
  25.   10: 'flowing_lava',
  26.   11: 'lava',
  27.   12: 'sand',
  28.   13: 'gravel',
  29.   14: 'gold_ore',
  30.   15: 'iron_ore',
  31.   16: 'coal_ore',
  32.   17: 'oak_log',
  33.   18: 'oak_leaves',
  34.   19: 'sponge',
  35.   20: 'glass',
  36.   21: 'lapis_ore',
  37.   24: 'sandstone',
  38.   30: 'cobweb',
  39.   31: 'grass',
  40.   32: 'dead_bush',
  41.   35: 'white_wool',
  42.   37: 'dandelion',
  43.   38: 'poppy',
  44.   39: 'brown_mushroom',
  45.   40: 'red_mushroom',
  46.   43: 'stone_slab',
  47.   44: 'stone_slab',
  48.   47: 'bookshelf',
  49.   48: 'mossy_cobblestone',
  50.   49: 'obsidian',
  51.   50: 'torch',
  52.   51: 'fire',
  53.   52: 'spawner',
  54.   53: 'oak_stairs',
  55.   54: 'chest',
  56.   56: 'diamond_ore',
  57.   58: 'crafting_table',
  58.   59: 'wheat',
  59.   60: 'farmland',
  60.   61: 'furnace',
  61.   62: 'furnace',
  62.   63: 'sign',# will change to oak_sign in 1.14
  63.   64: 'oak_door',
  64.   65: 'ladder',
  65.   66: 'rail',
  66.   67: 'cobblestone_stairs',
  67.   72: 'oak_pressure_plate',
  68.   73: 'redstone_ore',
  69.   74: 'redstone_ore',
  70.   78: 'snow',
  71.   79: 'ice',
  72.   81: 'cactus',
  73.   82: 'clay',
  74.   83: 'sugar_cane',
  75.   85: 'oak_fence',
  76.   86: 'pumpkin',
  77.   91: 'lit_pumpkin',
  78.     101: 'iron_bars',
  79.     102: 'glass_pane',
  80.     }


  81. def block_id_to_name(bid):
  82.     try:
  83.   name = block_ids[bid]
  84.     except KeyError:
  85.   name = 'unknown_%d' % (bid,)
  86.   print("warning: unknown block id %i" % bid)
  87.   print("hint: add that block to the 'block_ids' map")
  88.     return name


  89. # Generic Chunk

  90. class Chunk(object):
  91.     """Class for representing a single chunk."""
  92.     def __init__(self, nbt):
  93.   self.chunk_data = nbt['Level']
  94.   self.coords = self.chunk_data['xPos'],self.chunk_data['zPos']

  95.     def get_coords(self):
  96.   """Return the coordinates of this chunk."""
  97.   return (self.coords[0].value,self.coords[1].value)

  98.     def __repr__(self):
  99.   """Return a representation of this Chunk."""
  100.   return "Chunk("+str(self.coords[0])+","+str(self.coords[1])+")"


  101. # Chunk in Region old format

  102. class McRegionChunk(Chunk):

  103.     def __init__(self, nbt):
  104.   Chunk.__init__(self, nbt)
  105.   self.blocks = BlockArray(self.chunk_data['Blocks'].value, self.chunk_data['Data'].value)

  106.     def get_max_height(self):
  107.   return 127

  108.     def get_block(self, x, y, z):
  109.   name = block_id_to_name(self.blocks.get_block(x, y, z))
  110.   return name

  111.     def iter_block(self):
  112.   for y in range(0, 128):
  113.    for z in range(0, 16):
  114.     for x in range(0, 16):
  115.   yield self.get_block(x, y, z)


  116. # Section in Anvil new format

  117. class AnvilSection(object):

  118.     def __init__(self, nbt, version):
  119.   self.names = []
  120.   self.indexes = []

  121.   # Is the section flattened ?
  122.   # See https://minecraft.gamepedia.com/1.13/Flattening

  123.   if version == 0 or version == 1343:# 1343 = MC 1.12.2
  124.    self._init_array(nbt)
  125.   elif version == 1631:# MC 1.13
  126.    self._init_index(nbt)
  127.   else:
  128.    raise NotImplementedError()

  129.   # Section contains 4096 blocks whatever data version

  130.   assert len(self.indexes) == 4096


  131.     # Decode legacy section
  132.     # Contains an array of block numeric identifiers

  133.     def _init_array(self, nbt):
  134.   bids = []
  135.   for bid in nbt['Blocks'].value:
  136.    try:
  137.     i = bids.index(bid)
  138.    except ValueError:
  139.     bids.append(bid)
  140.     i = len(bids) - 1
  141.    self.indexes.append(i)

  142.   for bid in bids:
  143.    bname = block_id_to_name(bid)
  144.    self.names.append(bname)


  145.     # Decode modern section
  146.     # Contains palette of block names and indexes

  147.     def _init_index(self, nbt):

  148.   for p in nbt['Palette']:
  149.    name = p['Name'].value
  150.    if name.startswith('minecraft:'):
  151.     name = name[10:]
  152.    self.names.append(name)

  153.   states = nbt['BlockStates'].value

  154.   # Block states are packed into an array of longs
  155.   # with variable number of bits per block (min: 4)

  156.   nb = (len(self.names) - 1).bit_length()
  157.   if nb < 4: nb = 4
  158.   assert nb == len(states) * 8 * 8 / 4096
  159.   m = pow(2, nb) - 1

  160.   j = 0
  161.   bl = 64
  162.   ll = states[0]

  163.   for i in range(0,4096):
  164.    if bl == 0:
  165.     j = j + 1
  166.     ll = states[j]
  167.     bl = 64

  168.    if nb <= bl:
  169.     self.indexes.append(ll & m)
  170.     ll = ll >> nb
  171.     bl = bl - nb
  172.    else:
  173.     j = j + 1
  174.     lh = states[j]
  175.     bh = nb - bl

  176.     lh = (lh & (pow(2, bh) - 1)) << bl
  177.     ll = (ll & (pow(2, bl) - 1))
  178.     self.indexes.append(lh | ll)

  179.     ll = states[j]
  180.     ll = ll >> bh
  181.     bl = 64 - bh


  182.     def get_block(self, x, y, z):
  183.   # Blocks are stored in YZX order
  184.   i = y * 256 + z * 16 + x
  185.   p = self.indexes[i]
  186.   return self.names[p]


  187.     def iter_block(self):
  188.   for i in range(0, 4096):
  189.    p = self.indexes[i]
  190.    yield self.names[p]


  191. # Chunck in Anvil new format

  192. class AnvilChunk(Chunk):

  193.     def __init__(self, nbt):
  194.   Chunk.__init__(self, nbt)

  195.   # Started to work on this class with MC version 1.13.2
  196.   # so with the chunk data version 1631
  197.   # Backported to first Anvil version (= 0) from examples
  198.   # Could work with other versions, but has to be tested first

  199.   try:
  200.    version = nbt['DataVersion'].value
  201.    if version != 1343 and version != 1631:
  202.     raise NotImplementedError('DataVersion %d not implemented' % (version,))
  203.   except KeyError:
  204.    version = 0

  205.   # Load all sections

  206.   self.sections = {}
  207.   if 'Sections' in self.chunk_data:
  208.    for s in self.chunk_data['Sections']:
  209.     self.sections[s['Y'].value] = AnvilSection(s, version)


  210.     def get_section(self, y):
  211.   """Get a section from Y index."""
  212.   if y in self.sections:
  213.    return self.sections[y]

  214.   return None


  215.     def get_max_height(self):
  216.   ymax = 0
  217.   for y in self.sections.keys():
  218.    if y > ymax: ymax = y
  219.   return ymax * 16 + 15


  220.     def get_block(self, x, y, z):
  221.   """Get a block from relative x,y,z."""
  222.   sy,by = divmod(y, 16)
  223.   section = self.get_section(sy)
  224.   if section == None:
  225.    return None

  226.   return section.get_block(x, by, z)


  227.     def iter_block(self):
  228.   for s in self.sections.values():
  229.    for b in s.iter_block():
  230.     yield b


  231. class BlockArray(object):
  232.     """Convenience class for dealing with a Block/data byte array."""
  233.     def __init__(self, blocksBytes=None, dataBytes=None):
  234.   """Create a new BlockArray, defaulting to no block or data bytes."""
  235.   if isinstance(blocksBytes, (bytearray, array.array)):
  236.    self.blocksList = list(blocksBytes)
  237.   else:
  238.    self.blocksList = [0]*32768 # Create an empty block list (32768 entries of zero (air))

  239.   if isinstance(dataBytes, (bytearray, array.array)):
  240.    self.dataList = list(dataBytes)
  241.   else:
  242.    self.dataList = [0]*16384 # Create an empty data list (32768 4-bit entries of zero make 16384 byte entries)

  243.     def get_blocks_struct(self):
  244.   """Return a dictionary with block ids keyed to (x, y, z)."""
  245.   cur_x = 0
  246.   cur_y = 0
  247.   cur_z = 0
  248.   blocks = {}
  249.   for block_id in self.blocksList:
  250.    blocks[(cur_x,cur_y,cur_z)] = block_id
  251.    cur_y += 1
  252.    if (cur_y > 127):
  253.     cur_y = 0
  254.     cur_z += 1
  255.     if (cur_z > 15):
  256.   cur_z = 0
  257.   cur_x += 1
  258.   return blocks

  259.     # Give blockList back as a byte array
  260.     def get_blocks_byte_array(self, buffer=False):
  261.   """Return a list of all blocks in this chunk."""
  262.   if buffer:
  263.    length = len(self.blocksList)
  264.    return BytesIO(pack(">i", length)+self.get_blocks_byte_array())
  265.   else:
  266.    return array.array('B', self.blocksList).tostring()

  267.     def get_data_byte_array(self, buffer=False):
  268.   """Return a list of data for all blocks in this chunk."""
  269.   if buffer:
  270.    length = len(self.dataList)
  271.    return BytesIO(pack(">i", length)+self.get_data_byte_array())
  272.   else:
  273.    return array.array('B', self.dataList).tostring()

  274.     def generate_heightmap(self, buffer=False, as_array=False):
  275.   """Return a heightmap, representing the highest solid blocks in this chunk."""
  276.   non_solids = [0, 8, 9, 10, 11, 38, 37, 32, 31]
  277.   if buffer:
  278.    return BytesIO(pack(">i", 256)+self.generate_heightmap()) # Length + Heightmap, ready for insertion into Chunk NBT
  279.   else:
  280.    bytes = []
  281.    for z in range(16):
  282.     for x in range(16):
  283.   for y in range(127, -1, -1):
  284.    offset = y + z*128 + x*128*16
  285.    if (self.blocksList[offset] not in non_solids or y == 0):
  286.     bytes.append(y+1)
  287.     break
  288.    if (as_array):
  289.     return bytes
  290.    else:
  291.     return array.array('B', bytes).tostring()

  292.     def set_blocks(self, list=None, dict=None, fill_air=False):
  293.   """
  294.   Sets all blocks in this chunk, using either a list or dictionary.
  295.   Blocks not explicitly set can be filled to air by setting fill_air to True.
  296.   """
  297.   if list:
  298.    # Inputting a list like self.blocksList
  299.    self.blocksList = list
  300.   elif dict:
  301.    # Inputting a dictionary like result of self.get_blocks_struct()
  302.    list = []
  303.    for x in range(16):
  304.     for z in range(16):
  305.   for y in range(128):
  306.    coord = x,y,z
  307.    offset = y + z*128 + x*128*16
  308.    if (coord in dict):
  309.     list.append(dict[coord])
  310.    else:
  311.     if (self.blocksList[offset] and not fill_air):
  312.   list.append(self.blocksList[offset])
  313.     else:
  314.   list.append(0) # Air
  315.    self.blocksList = list
  316.   else:
  317.    # None of the above...
  318.    return False
  319.   return True

  320.     def set_block(self, x,y,z, id, data=0):
  321.   """Sets the block a x, y, z to the specified id, and optionally data."""
  322.   offset = y + z*128 + x*128*16
  323.   self.blocksList[offset] = id
  324.   if (offset % 2 == 1):
  325.    # offset is odd
  326.    index = (offset-1)//2
  327.    b = self.dataList[index]
  328.    self.dataList[index] = (b & 240) + (data & 15) # modify lower bits, leaving higher bits in place
  329.   else:
  330.    # offset is even
  331.    index = offset//2
  332.    b = self.dataList[index]
  333.    self.dataList[index] = (b & 15) + (data << 4 & 240) # modify higher bits, leaving lower bits in place

  334.     # Get a given X,Y,Z or a tuple of three coordinates
  335.     def get_block(self, x,y,z, coord=False):
  336.   """Return the id of the block at x, y, z."""
  337.   """
  338.   Laid out like:
  339.   (0,0,0), (0,1,0), (0,2,0) ... (0,127,0), (0,0,1), (0,1,1), (0,2,1) ... (0,127,1), (0,0,2) ... (0,127,15), (1,0,0), (1,1,0) ... (15,127,15)
  340.  
  341.   ::
  342.  
  343.     blocks = []
  344.     for x in range(15):
  345.    for z in range(15):
  346.   for y in range(127):
  347.     blocks.append(Block(x,y,z))
  348.   """

  349.   offset = y + z*128 + x*128*16 if (coord == False) else coord[1] + coord[2]*128 + coord[0]*128*16
  350.   return self.blocksList[offset]



nbt/nbt.py


Code:


  1. """
  2. Handle the NBT (Named Binary Tag) data format
  3. For more information about the NBT format:
  4. https://minecraft.gamepedia.com/NBT_format
  5. """

  6. from struct import Struct, error as StructError
  7. from gzip import GzipFile
  8. from collections import MutableMapping, MutableSequence, Sequence
  9. import sys

  10. _PY3 = sys.version_info >= (3,)
  11. if _PY3:
  12.     unicode = str
  13.     basestring = str
  14. else:
  15.     range = xrange

  16. TAG_END = 0
  17. TAG_BYTE = 1
  18. TAG_SHORT = 2
  19. TAG_INT = 3
  20. TAG_LONG = 4
  21. TAG_FLOAT = 5
  22. TAG_DOUBLE = 6
  23. TAG_BYTE_ARRAY = 7
  24. TAG_STRING = 8
  25. TAG_LIST = 9
  26. TAG_COMPOUND = 10
  27. TAG_INT_ARRAY = 11
  28. TAG_LONG_ARRAY = 12


  29. class MalformedFileError(Exception):
  30.     """Exception raised on parse error."""
  31.     pass


  32. class TAG(object):
  33.     """TAG, a variable with an intrinsic name."""
  34.     id = None

  35.     def __init__(self, value=None, name=None):
  36.   self.name = name
  37.   self.value = value

  38.     # Parsers and Generators
  39.     def _parse_buffer(self, buffer):
  40.   raise NotImplementedError(self.__class__.__name__)

  41.     def _render_buffer(self, buffer):
  42.   raise NotImplementedError(self.__class__.__name__)

  43.     # Printing and Formatting of tree
  44.     def tag_info(self):
  45.   """Return Unicode string with class, name and unnested value."""
  46.   return self.__class__.__name__ + (
  47.    '(%r)' % self.name if self.name
  48.    else "") + ": " + self.valuestr()

  49.     def valuestr(self):
  50.   """Return Unicode string of unnested value. For iterators, this
  51.   returns a summary."""
  52.   return unicode(self.value)

  53.     def pretty_tree(self, indent=0):
  54.   """Return formated Unicode string of self, where iterable items are
  55.   recursively listed in detail."""
  56.   return ("\t" * indent) + self.tag_info()

  57.     # Python 2 compatibility; Python 3 uses __str__ instead.
  58.     def __unicode__(self):
  59.   """Return a unicode string with the result in human readable format.
  60.   Unlike valuestr(), the result is recursive for iterators till at least
  61.   one level deep."""
  62.   return unicode(self.value)

  63.     def __str__(self):
  64.   """Return a string (ascii formated for Python 2, unicode for Python 3)
  65.   with the result in human readable format. Unlike valuestr(), the result
  66.    is recursive for iterators till at least one level deep."""
  67.   return str(self.value)

  68.     # Unlike regular iterators, __repr__() is not recursive.
  69.     # Use pretty_tree for recursive results.
  70.     # iterators should use __repr__ or tag_info for each item, like
  71.     #regular iterators
  72.     def __repr__(self):
  73.   """Return a string (ascii formated for Python 2, unicode for Python 3)
  74.   describing the class, name and id for debugging purposes."""
  75.   return "<%s(%r) at 0x%x>" % (
  76.    self.__class__.__name__, self.name, id(self))


  77. class _TAG_Numeric(TAG):
  78.     """_TAG_Numeric, comparable to int with an intrinsic name"""

  79.     def __init__(self, value=None, name=None, buffer=None):
  80.   super(_TAG_Numeric, self).__init__(value, name)
  81.   if buffer:
  82.    self._parse_buffer(buffer)

  83.     # Parsers and Generators
  84.     def _parse_buffer(self, buffer):
  85.   # Note: buffer.read() may raise an IOError, for example if buffer is a
  86.   # corrupt gzip.GzipFile
  87.   self.value = self.fmt.unpack(buffer.read(self.fmt.size))[0]

  88.     def _render_buffer(self, buffer):
  89.   buffer.write(self.fmt.pack(self.value))


  90. class _TAG_End(TAG):
  91.     id = TAG_END
  92.     fmt = Struct(">b")

  93.     def _parse_buffer(self, buffer):
  94.   # Note: buffer.read() may raise an IOError, for example if buffer is a
  95.   # corrupt gzip.GzipFile
  96.   value = self.fmt.unpack(buffer.read(1))[0]
  97.   if value != 0:
  98.    raise ValueError(
  99.     "A Tag End must be rendered as '0', not as '%d'." % value)

  100.     def _render_buffer(self, buffer):
  101.   buffer.write(b'\x00')


  102. # == Value Tags ==#
  103. class TAG_Byte(_TAG_Numeric):
  104.     """Represent a single tag storing 1 byte."""
  105.     id = TAG_BYTE
  106.     fmt = Struct(">b")


  107. class TAG_Short(_TAG_Numeric):
  108.     """Represent a single tag storing 1 short."""
  109.     id = TAG_SHORT
  110.     fmt = Struct(">h")


  111. class TAG_Int(_TAG_Numeric):
  112.     """Represent a single tag storing 1 int."""
  113.     id = TAG_INT
  114.     fmt = Struct(">i")
  115.     """Struct(">i"), 32-bits integer, big-endian"""


  116. class TAG_Long(_TAG_Numeric):
  117.     """Represent a single tag storing 1 long."""
  118.     id = TAG_LONG
  119.     fmt = Struct(">q")


  120. class TAG_Float(_TAG_Numeric):
  121.     """Represent a single tag storing 1 IEEE-754 floating point number."""
  122.     id = TAG_FLOAT
  123.     fmt = Struct(">f")


  124. class TAG_Double(_TAG_Numeric):
  125.     """Represent a single tag storing 1 IEEE-754 double precision floating
  126.     point number."""
  127.     id = TAG_DOUBLE
  128.     fmt = Struct(">d")


  129. class TAG_Byte_Array(TAG, MutableSequence):
  130.     """
  131.     TAG_Byte_Array, comparable to a collections.UserList with
  132.     an intrinsic name whose values must be bytes
  133.     """
  134.     id = TAG_BYTE_ARRAY

  135.     def __init__(self, name=None, buffer=None):
  136.   # TODO: add a value parameter as well
  137.   super(TAG_Byte_Array, self).__init__(name=name)
  138.   if buffer:
  139.    self._parse_buffer(buffer)

  140.     # Parsers and Generators
  141.     def _parse_buffer(self, buffer):
  142.   length = TAG_Int(buffer=buffer)
  143.   self.value = bytearray(buffer.read(length.value))

  144.     def _render_buffer(self, buffer):
  145.   length = TAG_Int(len(self.value))
  146.   length._render_buffer(buffer)
  147.   buffer.write(bytes(self.value))

  148.     # Mixin methods
  149.     def __len__(self):
  150.   return len(self.value)

  151.     def __iter__(self):
  152.   return iter(self.value)

  153.     def __contains__(self, item):
  154.   return item in self.value

  155.     def __getitem__(self, key):
  156.   return self.value[key]

  157.     def __setitem__(self, key, value):
  158.   # TODO: check type of value
  159.   self.value[key] = value

  160.     def __delitem__(self, key):
  161.   del (self.value[key])

  162.     def insert(self, key, value):
  163.   # TODO: check type of value, or is this done by self.value already?
  164.   self.value.insert(key, value)

  165.     # Printing and Formatting of tree
  166.     def valuestr(self):
  167.   return "[%i byte(s)]" % len(self.value)

  168.     def __unicode__(self):
  169.   return '[' + ",".join([unicode(x) for x in self.value]) + ']'

  170.     def __str__(self):
  171.   return '[' + ",".join([str(x) for x in self.value]) + ']'


  172. class TAG_Int_Array(TAG, MutableSequence):
  173.     """
  174.     TAG_Int_Array, comparable to a collections.UserList with
  175.     an intrinsic name whose values must be integers
  176.     """
  177.     id = TAG_INT_ARRAY

  178.     def __init__(self, name=None, buffer=None):
  179.   # TODO: add a value parameter as well
  180.   super(TAG_Int_Array, self).__init__(name=name)
  181.   if buffer:
  182.    self._parse_buffer(buffer)

  183.     def update_fmt(self, length):
  184.   """ Adjust struct format description to length given """
  185.   self.fmt = Struct(">" + str(length) + "i")

  186.     # Parsers and Generators
  187.     def _parse_buffer(self, buffer):
  188.   length = TAG_Int(buffer=buffer).value
  189.   self.update_fmt(length)
  190.   self.value = list(self.fmt.unpack(buffer.read(self.fmt.size)))

  191.     def _render_buffer(self, buffer):
  192.   length = len(self.value)
  193.   self.update_fmt(length)
  194.   TAG_Int(length)._render_buffer(buffer)
  195.   buffer.write(self.fmt.pack(*self.value))

  196.     # Mixin methods
  197.     def __len__(self):
  198.   return len(self.value)

  199.     def __iter__(self):
  200.   return iter(self.value)

  201.     def __contains__(self, item):
  202.   return item in self.value

  203.     def __getitem__(self, key):
  204.   return self.value[key]

  205.     def __setitem__(self, key, value):
  206.   self.value[key] = value

  207.     def __delitem__(self, key):
  208.   del (self.value[key])

  209.     def insert(self, key, value):
  210.   self.value.insert(key, value)

  211.     # Printing and Formatting of tree
  212.     def valuestr(self):
  213.   return "[%i int(s)]" % len(self.value)


  214. class TAG_Long_Array(TAG, MutableSequence):
  215.     """
  216.     TAG_Long_Array, comparable to a collections.UserList with
  217.     an intrinsic name whose values must be integers
  218.     """
  219.     id = TAG_LONG_ARRAY

  220.     def __init__(self, name=None, buffer=None):
  221.   super(TAG_Long_Array, self).__init__(name=name)
  222.   if buffer:
  223.    self._parse_buffer(buffer)

  224.     def update_fmt(self, length):
  225.   """ Adjust struct format description to length given """
  226.   self.fmt = Struct(">" + str(length) + "q")

  227.     # Parsers and Generators
  228.     def _parse_buffer(self, buffer):
  229.   length = TAG_Int(buffer=buffer).value
  230.   self.update_fmt(length)
  231.   self.value = list(self.fmt.unpack(buffer.read(self.fmt.size)))

  232.     def _render_buffer(self, buffer):
  233.   length = len(self.value)
  234.   self.update_fmt(length)
  235.   TAG_Int(length)._render_buffer(buffer)
  236.   buffer.write(self.fmt.pack(*self.value))

  237.     # Mixin methods
  238.     def __len__(self):
  239.   return len(self.value)

  240.     def __iter__(self):
  241.   return iter(self.value)

  242.     def __contains__(self, item):
  243.   return item in self.value

  244.     def __getitem__(self, key):
  245.   return self.value[key]

  246.     def __setitem__(self, key, value):
  247.   self.value[key] = value

  248.     def __delitem__(self, key):
  249.   del (self.value[key])

  250.     def insert(self, key, value):
  251.   self.value.insert(key, value)

  252.     # Printing and Formatting of tree
  253.     def valuestr(self):
  254.   return "[%i long(s)]" % len(self.value)


  255. class TAG_String(TAG, Sequence):
  256.     """
  257.     TAG_String, comparable to a collections.UserString with an
  258.     intrinsic name
  259.     """
  260.     id = TAG_STRING

  261.     def __init__(self, value=None, name=None, buffer=None):
  262.   super(TAG_String, self).__init__(value, name)
  263.   if buffer:
  264.    self._parse_buffer(buffer)

  265.     # Parsers and Generators
  266.     def _parse_buffer(self, buffer):
  267.   length = TAG_Short(buffer=buffer)
  268.   read = buffer.read(length.value)
  269.   if len(read) != length.value:
  270.    raise StructError()
  271.   self.value = read.decode("utf-8")

  272.     def _render_buffer(self, buffer):
  273.   save_val = self.value.encode("utf-8")
  274.   length = TAG_Short(len(save_val))
  275.   length._render_buffer(buffer)
  276.   buffer.write(save_val)

  277.     # Mixin methods
  278.     def __len__(self):
  279.   return len(self.value)

  280.     def __iter__(self):
  281.   return iter(self.value)

  282.     def __contains__(self, item):
  283.   return item in self.value

  284.     def __getitem__(self, key):
  285.   return self.value[key]

  286.     # Printing and Formatting of tree
  287.     def __repr__(self):
  288.   return self.value


  289. # == Collection Tags ==#
  290. class TAG_List(TAG, MutableSequence):
  291.     """
  292.     TAG_List, comparable to a collections.UserList with an intrinsic name
  293.     """
  294.     id = TAG_LIST

  295.     def __init__(self, type=None, value=None, name=None, buffer=None):
  296.   super(TAG_List, self).__init__(value, name)
  297.   if type:
  298.    self.tagID = type.id
  299.   else:
  300.    self.tagID = None
  301.   self.tags = []
  302.   if buffer:
  303.    self._parse_buffer(buffer)
  304.   # if self.tagID == None:
  305.   #  raise ValueError("No type specified for list: %s" % (name))

  306.     # Parsers and Generators
  307.     def _parse_buffer(self, buffer):
  308.   self.tagID = TAG_Byte(buffer=buffer).value
  309.   self.tags = []
  310.   length = TAG_Int(buffer=buffer)
  311.   for x in range(length.value):
  312.    self.tags.append(TAGLIST[self.tagID](buffer=buffer))

  313.     def _render_buffer(self, buffer):
  314.   TAG_Byte(self.tagID)._render_buffer(buffer)
  315.   length = TAG_Int(len(self.tags))
  316.   length._render_buffer(buffer)
  317.   for i, tag in enumerate(self.tags):
  318.    if tag.id != self.tagID:
  319.     raise ValueError(
  320.   "List element %d(%s) has type %d != container type %d" %
  321.   (i, tag, tag.id, self.tagID))
  322.    tag._render_buffer(buffer)

  323.     # Mixin methods
  324.     def __len__(self):
  325.   return len(self.tags)

  326.     def __iter__(self):
  327.   return iter(self.tags)

  328.     def __contains__(self, item):
  329.   return item in self.tags

  330.     def __getitem__(self, key):
  331.   return self.tags[key]

  332.     def __setitem__(self, key, value):
  333.   self.tags[key] = value

  334.     def __delitem__(self, key):
  335.   del (self.tags[key])

  336.     def insert(self, key, value):
  337.   self.tags.insert(key, value)

  338.     # Printing and Formatting of tree
  339.     def __repr__(self):
  340.   return "%i entries of type %s" % (
  341.    len(self.tags), TAGLIST[self.tagID].__name__)

  342.     # Printing and Formatting of tree
  343.     def valuestr(self):
  344.   return "[%i %s(s)]" % (len(self.tags), TAGLIST[self.tagID].__name__)

  345.     def __unicode__(self):
  346.   return "[" + ", ".join([tag.tag_info() for tag in self.tags]) + "]"

  347.     def __str__(self):
  348.   return "[" + ", ".join([tag.tag_info() for tag in self.tags]) + "]"

  349.     def pretty_tree(self, indent=0):
  350.   output = [super(TAG_List, self).pretty_tree(indent)]
  351.   if len(self.tags):
  352.    output.append(("\t" * indent) + "{")
  353.    output.extend([tag.pretty_tree(indent + 1) for tag in self.tags])
  354.    output.append(("\t" * indent) + "}")
  355.   return '\n'.join(output)


  356. class TAG_Compound(TAG, MutableMapping):
  357.     """
  358.     TAG_Compound, comparable to a collections.OrderedDict with an
  359.     intrinsic name
  360.     """
  361.     id = TAG_COMPOUND

  362.     def __init__(self, buffer=None, name=None):
  363.   # TODO: add a value parameter as well
  364.   super(TAG_Compound, self).__init__()
  365.   self.tags = []
  366.   self.name = ""
  367.   if buffer:
  368.    self._parse_buffer(buffer)

  369.     # Parsers and Generators
  370.     def _parse_buffer(self, buffer):
  371.   while True:
  372.    type = TAG_Byte(buffer=buffer)
  373.    if type.value == TAG_END:
  374.     # print("found tag_end")
  375.     break
  376.    else:
  377.     name = TAG_String(buffer=buffer).value
  378.     try:
  379.   tag = TAGLIST[type.value]()
  380.     except KeyError:
  381.   raise ValueError("Unrecognised tag type %d" % type.value)
  382.     tag.name = name
  383.     self.tags.append(tag)
  384.     tag._parse_buffer(buffer)

  385.     def _render_buffer(self, buffer):
  386.   for tag in self.tags:
  387.    TAG_Byte(tag.id)._render_buffer(buffer)
  388.    TAG_String(tag.name)._render_buffer(buffer)
  389.    tag._render_buffer(buffer)
  390.   buffer.write(b'\x00')# write TAG_END

  391.     # Mixin methods
  392.     def __len__(self):
  393.   return len(self.tags)

  394.     def __iter__(self):
  395.   for key in self.tags:
  396.    yield key.name

  397.     def __contains__(self, key):
  398.   if isinstance(key, int):
  399.    return key <= len(self.tags)
  400.   elif isinstance(key, basestring):
  401.    for tag in self.tags:
  402.     if tag.name == key:
  403.   return True
  404.    return False
  405.   elif isinstance(key, TAG):
  406.    return key in self.tags
  407.   return False

  408.     def __getitem__(self, key):
  409.   if isinstance(key, int):
  410.    return self.tags[key]
  411.   elif isinstance(key, basestring):
  412.    for tag in self.tags:
  413.     if tag.name == key:
  414.   return tag
  415.    else:
  416.     raise KeyError("Tag %s does not exist" % key)
  417.   else:
  418.    raise TypeError(
  419.     "key needs to be either name of tag, or index of tag, "
  420.     "not a %s" % type(key).__name__)

  421.     def __setitem__(self, key, value):
  422.   assert isinstance(value, TAG), "value must be an nbt.TAG"
  423.   if isinstance(key, int):
  424.    # Just try it. The proper error will be raised if it doesn't work.
  425.    self.tags[key] = value
  426.   elif isinstance(key, basestring):
  427.    value.name = key
  428.    for i, tag in enumerate(self.tags):
  429.     if tag.name == key:
  430.   self.tags[i] = value
  431.   return
  432.    self.tags.append(value)

  433.     def __delitem__(self, key):
  434.   if isinstance(key, int):
  435.    del (self.tags[key])
  436.   elif isinstance(key, basestring):
  437.    self.tags.remove(self.__getitem__(key))
  438.   else:
  439.    raise ValueError(
  440.     "key needs to be either name of tag, or index of tag")

  441.     def keys(self):
  442.   return [tag.name for tag in self.tags]

  443.     def iteritems(self):
  444.   for tag in self.tags:
  445.    yield (tag.name, tag)

  446.     # Printing and Formatting of tree
  447.     def __unicode__(self):
  448.   return "{" + ", ".join([tag.tag_info() for tag in self.tags]) + "}"

  449.     def __str__(self):
  450.   return "{" + ", ".join([tag.tag_info() for tag in self.tags]) + "}"

  451.     def valuestr(self):
  452.   return '{%i Entries}' % len(self.tags)

  453.     def pretty_tree(self, indent=0):
  454.   output = [super(TAG_Compound, self).pretty_tree(indent)]
  455.   if len(self.tags):
  456.    output.append(("\t" * indent) + "{")
  457.    output.extend([tag.pretty_tree(indent + 1) for tag in self.tags])
  458.    output.append(("\t" * indent) + "}")
  459.   return '\n'.join(output)


  460. TAGLIST = {TAG_END: _TAG_End, TAG_BYTE: TAG_Byte, TAG_SHORT: TAG_Short,
  461.   TAG_INT: TAG_Int, TAG_LONG: TAG_Long, TAG_FLOAT: TAG_Float,
  462.   TAG_DOUBLE: TAG_Double, TAG_BYTE_ARRAY: TAG_Byte_Array,
  463.   TAG_STRING: TAG_String, TAG_LIST: TAG_List,
  464.   TAG_COMPOUND: TAG_Compound, TAG_INT_ARRAY: TAG_Int_Array,
  465.   TAG_LONG_ARRAY: TAG_Long_Array}


  466. class NBTFile(TAG_Compound):
  467.     """Represent an NBT file object."""

  468.     def __init__(self, filename=None, buffer=None, fileobj=None):
  469.   """
  470.   Create a new NBTFile object.
  471.   Specify either a filename, file object or data buffer.
  472.   If filename of file object is specified, data should be GZip-compressed.
  473.   If a data buffer is specified, it is assumed to be uncompressed.
  474.   If filename is specified, the file is closed after reading and writing.
  475.   If file object is specified, the caller is responsible for closing the
  476.   file.
  477.   """
  478.   super(NBTFile, self).__init__()
  479.   self.filename = filename
  480.   self.type = TAG_Byte(self.id)
  481.   closefile = True
  482.   # make a file object
  483.   if filename:
  484.    self.filename = filename
  485.    self.file = GzipFile(filename, 'rb')
  486.   elif buffer:
  487.    if hasattr(buffer, 'name'):
  488.     self.filename = buffer.name
  489.    self.file = buffer
  490.    closefile = False
  491.   elif fileobj:
  492.    if hasattr(fileobj, 'name'):
  493.     self.filename = fileobj.name
  494.    self.file = GzipFile(fileobj=fileobj)
  495.   else:
  496.    self.file = None
  497.    closefile = False
  498.   # parse the file given initially
  499.   if self.file:
  500.    self.parse_file()
  501.    if closefile:
  502.     # Note: GzipFile().close() does NOT close the fileobj,
  503.     # So we are still responsible for closing that.
  504.     try:
  505.   self.file.close()
  506.     except (AttributeError, IOError):
  507.   pass
  508.    self.file = None

  509.     def parse_file(self, filename=None, buffer=None, fileobj=None):
  510.   """Completely parse a file, extracting all tags."""
  511.   if filename:
  512.    self.file = GzipFile(filename, 'rb')
  513.   elif buffer:
  514.    if hasattr(buffer, 'name'):
  515.     self.filename = buffer.name
  516.    self.file = buffer
  517.   elif fileobj:
  518.    if hasattr(fileobj, 'name'):
  519.     self.filename = fileobj.name
  520.    self.file = GzipFile(fileobj=fileobj)
  521.   if self.file:
  522.    try:
  523.     type = TAG_Byte(buffer=self.file)
  524.     if type.value == self.id:
  525.   name = TAG_String(buffer=self.file).value
  526.   self._parse_buffer(self.file)
  527.   self.name = name
  528.   self.file.close()
  529.     else:
  530.   raise MalformedFileError(
  531.    "First record is not a Compound Tag")
  532.    except StructError as e:
  533.     raise MalformedFileError(
  534.   "Partial File Parse: file possibly truncated.")
  535.   else:
  536.    raise ValueError(
  537.     "NBTFile.parse_file(): Need to specify either a "
  538.     "filename or a file object"
  539.    )

  540.     def write_file(self, filename=None, buffer=None, fileobj=None):
  541.   """Write this NBT file to a file."""
  542.   closefile = True
  543.   if buffer:
  544.    self.filename = None
  545.    self.file = buffer
  546.    closefile = False
  547.   elif filename:
  548.    self.filename = filename
  549.    self.file = GzipFile(filename, "wb")
  550.   elif fileobj:
  551.    self.filename = None
  552.    self.file = GzipFile(fileobj=fileobj, mode="wb")
  553.   elif self.filename:
  554.    self.file = GzipFile(self.filename, "wb")
  555.   elif not self.file:
  556.    raise ValueError(
  557.     "NBTFile.write_file(): Need to specify either a "
  558.     "filename or a file object"
  559.    )
  560.   # Render tree to file
  561.   TAG_Byte(self.id)._render_buffer(self.file)
  562.   TAG_String(self.name)._render_buffer(self.file)
  563.   self._render_buffer(self.file)
  564.   # make sure the file is complete
  565.   try:
  566.    self.file.flush()
  567.   except (AttributeError, IOError):
  568.    pass
  569.   if closefile:
  570.    try:
  571.     self.file.close()
  572.    except (AttributeError, IOError):
  573.     pass

  574.     def __repr__(self):
  575.   """
  576.   Return a string (ascii formated for Python 2, unicode
  577.   for Python 3) describing the class, name and id for
  578.   debugging purposes.
  579.   """
  580.   if self.filename:
  581.    return "<%s(%r) with %s(%r) at 0x%x>" % (
  582.     self.__class__.__name__, self.filename,
  583.     TAG_Compound.__name__, self.name, id(self)
  584.    )
  585.   else:
  586.    return "<%s with %s(%r) at 0x%x>" % (
  587.     self.__class__.__name__, TAG_Compound.__name__,
  588.     self.name, id(self)
  589.    )



nbt/region.py


Code:



  1. """
  2. Handle a region file, containing 32x32 chunks.
  3. For more information about the region file format:
  4. https://minecraft.gamepedia.com/Region_file_format
  5. """

  6. from .nbt import NBTFile, MalformedFileError
  7. from struct import pack, unpack
  8. from collections import Mapping
  9. import zlib
  10. import gzip
  11. from io import BytesIO
  12. import time
  13. from os import SEEK_END

  14. # constants

  15. SECTOR_LENGTH = 4096
  16. """Constant indicating the length of a sector. A Region file is divided in sectors of 4096 bytes each."""

  17. # TODO: move status codes to an (Enum) object

  18. # Status is a number representing:
  19. # -5 = Error, the chunk is overlapping with another chunk
  20. # -4 = Error, the chunk length is too large to fit in the sector length in the region header
  21. # -3 = Error, chunk header has a 0 length
  22. # -2 = Error, chunk inside the header of the region file
  23. # -1 = Error, chunk partially/completely outside of file
  24. #0 = Ok
  25. #1 = Chunk non-existant yet
  26. STATUS_CHUNK_OVERLAPPING = -5
  27. """Constant indicating an error status: the chunk is allocated to a sector already occupied by another chunk"""
  28. STATUS_CHUNK_MISMATCHED_LENGTHS = -4
  29. """Constant indicating an error status: the region header length and the chunk length are incompatible"""
  30. STATUS_CHUNK_ZERO_LENGTH = -3
  31. """Constant indicating an error status: chunk header has a 0 length"""
  32. STATUS_CHUNK_IN_HEADER = -2
  33. """Constant indicating an error status: chunk inside the header of the region file"""
  34. STATUS_CHUNK_OUT_OF_FILE = -1
  35. """Constant indicating an error status: chunk partially/completely outside of file"""
  36. STATUS_CHUNK_OK = 0
  37. """Constant indicating an normal status: the chunk exists and the metadata is valid"""
  38. STATUS_CHUNK_NOT_CREATED = 1
  39. """Constant indicating an normal status: the chunk does not exist"""

  40. COMPRESSION_NONE = 0
  41. """Constant indicating that the chunk is not compressed."""
  42. COMPRESSION_GZIP = 1
  43. """Constant indicating that the chunk is GZip compressed."""
  44. COMPRESSION_ZLIB = 2
  45. """Constant indicating that the chunk is zlib compressed."""


  46. # TODO: reconsider these errors. where are they catched? Where would an implementation make a difference in handling the different exceptions.

  47. class RegionFileFormatError(Exception):
  48.     """Base class for all file format errors.
  49.     Note: InconceivedChunk is not a child class, because it is not considered a format error."""
  50.     def __init__(self, msg=""):
  51.   self.msg = msg
  52.     def __str__(self):
  53.   return self.msg

  54. class NoRegionHeader(RegionFileFormatError):
  55.     """The size of the region file is too small to contain a header."""

  56. class RegionHeaderError(RegionFileFormatError):
  57.     """Error in the header of the region file for a given chunk."""

  58. class ChunkHeaderError(RegionFileFormatError):
  59.     """Error in the header of a chunk, included the bytes of length and byte version."""

  60. class ChunkDataError(RegionFileFormatError):
  61.     """Error in the data of a chunk."""

  62. class InconceivedChunk(LookupError):
  63.     """Specified chunk has not yet been generated."""
  64.     def __init__(self, msg=""):
  65.   self.msg = msg


  66. class ChunkMetadata(object):
  67.     """
  68.     Metadata for a particular chunk found in the 8 kiByte header and 5-byte chunk header.
  69.     """

  70.     def __init__(self, x, z):
  71.   self.x = x
  72.   """x-coordinate of the chunk in the file"""
  73.   self.z = z
  74.   """z-coordinate of the chunk in the file"""
  75.   self.blockstart = 0
  76.   """start of the chunk block, counted in 4 kiByte sectors from the
  77.   start of the file. (24 bit int)"""
  78.   self.blocklength = 0
  79.   """amount of 4 kiBytes sectors in the block (8 bit int)"""
  80.   self.timestamp = 0
  81.   """a Unix timestamps (seconds since epoch) (32 bits), found in the
  82.   second sector in the file."""
  83.   self.length = 0
  84.   """length of the block in bytes. This excludes the 4-byte length header,
  85.   and includes the 1-byte compression byte. (32 bit int)"""
  86.   self.compression = None
  87.   """type of compression used for the chunk block. (8 bit int).
  88.    
  89.   - 0: uncompressed
  90.   - 1: gzip compression
  91.   - 2: zlib compression"""
  92.   self.status = STATUS_CHUNK_NOT_CREATED
  93.   """status as determined from blockstart, blocklength, length, file size
  94.   and location of other chunks in the file.
  95.  
  96.   - STATUS_CHUNK_OVERLAPPING
  97.   - STATUS_CHUNK_MISMATCHED_LENGTHS
  98.   - STATUS_CHUNK_ZERO_LENGTH
  99.   - STATUS_CHUNK_IN_HEADER
  100.   - STATUS_CHUNK_OUT_OF_FILE
  101.   - STATUS_CHUNK_OK
  102.   - STATUS_CHUNK_NOT_CREATED"""
  103.     def __str__(self):
  104.   return "%s(%d, %d, sector=%s, blocklength=%s, timestamp=%s, bytelength=%s, compression=%s, status=%s)" % \
  105.    (self.__class__.__name__, self.x, self.z, self.blockstart, self.blocklength, self.timestamp, \
  106.    self.length, self.compression, self.status)
  107.     def __repr__(self):
  108.   return "%s(%d,%d)" % (self.__class__.__name__, self.x, self.z)
  109.     def requiredblocks(self):
  110.   # slightly faster variant of: floor(self.length + 4) / 4096))
  111.   return (self.length + 3 + SECTOR_LENGTH) // SECTOR_LENGTH
  112.     def is_created(self):
  113.   """return True if this chunk is created according to the header.
  114.   This includes chunks which are not readable for other reasons."""
  115.   return self.blockstart != 0

  116. class _HeaderWrapper(Mapping):
  117.     """Wrapper around self.metadata to emulate the old self.header variable"""
  118.     def __init__(self, metadata):
  119.   self.metadata = metadata
  120.     def __getitem__(self, xz):
  121.   m = self.metadata[xz]
  122.   return (m.blockstart, m.blocklength, m.timestamp, m.status)
  123.     def __iter__(self):
  124.   return iter(self.metadata) # iterates over the keys
  125.     def __len__(self):
  126.   return len(self.metadata)
  127. class _ChunkHeaderWrapper(Mapping):
  128.     """Wrapper around self.metadata to emulate the old self.chunk_headers variable"""
  129.     def __init__(self, metadata):
  130.   self.metadata = metadata
  131.     def __getitem__(self, xz):
  132.   m = self.metadata[xz]
  133.   return (m.length if m.length > 0 else None, m.compression, m.status)
  134.     def __iter__(self):
  135.   return iter(self.metadata) # iterates over the keys
  136.     def __len__(self):
  137.   return len(self.metadata)

  138. class Location(object):
  139.     def __init__(self, x=None, y=None, z=None):
  140.   self.x = x
  141.   self.y = y
  142.   self.z = z
  143.     def __str__(self):
  144.   return "%s(x=%s, y=%s, z=%s)" % (self.__class__.__name__, self.x, self.y, self.z)

  145. class RegionFile(object):
  146.     """A convenience class for extracting NBT files from the Minecraft Beta Region Format."""
  147.    
  148.     # Redefine constants for backward compatibility.
  149.     STATUS_CHUNK_OVERLAPPING = STATUS_CHUNK_OVERLAPPING
  150.     """Constant indicating an error status: the chunk is allocated to a sector
  151.     already occupied by another chunk.
  152.     Deprecated. Use :const:`nbt.region.STATUS_CHUNK_OVERLAPPING` instead."""
  153.     STATUS_CHUNK_MISMATCHED_LENGTHS = STATUS_CHUNK_MISMATCHED_LENGTHS
  154.     """Constant indicating an error status: the region header length and the chunk
  155.     length are incompatible. Deprecated. Use :const:`nbt.region.STATUS_CHUNK_MISMATCHED_LENGTHS` instead."""
  156.     STATUS_CHUNK_ZERO_LENGTH = STATUS_CHUNK_ZERO_LENGTH
  157.     """Constant indicating an error status: chunk header has a 0 length.
  158.     Deprecated. Use :const:`nbt.region.STATUS_CHUNK_ZERO_LENGTH` instead."""
  159.     STATUS_CHUNK_IN_HEADER = STATUS_CHUNK_IN_HEADER
  160.     """Constant indicating an error status: chunk inside the header of the region file.
  161.     Deprecated. Use :const:`nbt.region.STATUS_CHUNK_IN_HEADER` instead."""
  162.     STATUS_CHUNK_OUT_OF_FILE = STATUS_CHUNK_OUT_OF_FILE
  163.     """Constant indicating an error status: chunk partially/completely outside of file.
  164.     Deprecated. Use :const:`nbt.region.STATUS_CHUNK_OUT_OF_FILE` instead."""
  165.     STATUS_CHUNK_OK = STATUS_CHUNK_OK
  166.     """Constant indicating an normal status: the chunk exists and the metadata is valid.
  167.     Deprecated. Use :const:`nbt.region.STATUS_CHUNK_OK` instead."""
  168.     STATUS_CHUNK_NOT_CREATED = STATUS_CHUNK_NOT_CREATED
  169.     """Constant indicating an normal status: the chunk does not exist.
  170.     Deprecated. Use :const:`nbt.region.STATUS_CHUNK_NOT_CREATED` instead."""
  171.    
  172.     def __init__(self, filename=None, fileobj=None, chunkclass = None):
  173.   """
  174.   Read a region file by filename or file object.
  175.   If a fileobj is specified, it is not closed after use; it is the callers responibility to close it.
  176.   """
  177.   self.file = None
  178.   self.filename = None
  179.   self._closefile = False
  180.   self.chunkclass = chunkclass
  181.   if filename:
  182.    self.filename = filename
  183.    self.file = open(filename, 'r+b') # open for read and write in binary mode
  184.    self._closefile = True
  185.   elif fileobj:
  186.    if hasattr(fileobj, 'name'):
  187.     self.filename = fileobj.name
  188.    self.file = fileobj
  189.   elif not self.file:
  190.    raise ValueError("RegionFile(): Need to specify either a filename or a file object")

  191.   # Some variables
  192.   self.metadata = {}
  193.   """
  194.   dict containing ChunkMetadata objects, gathered from metadata found in the
  195.   8 kiByte header and 5-byte chunk header.
  196.  
  197.   ``metadata[x, z]: ChunkMetadata()``
  198.   """
  199.   self.header = _HeaderWrapper(self.metadata)
  200.   """
  201.   dict containing the metadata found in the 8 kiByte header:
  202.  
  203.   ``header[x, z]: (offset, sectionlength, timestamp, status)``
  204.  
  205.   :offset: counts in 4 kiByte sectors, starting from the start of the file. (24 bit int)
  206.   :blocklength: is in 4 kiByte sectors (8 bit int)
  207.   :timestamp: is a Unix timestamps (seconds since epoch) (32 bits)
  208.   :status: can be any of:
  209.  
  210.    - STATUS_CHUNK_OVERLAPPING
  211.    - STATUS_CHUNK_MISMATCHED_LENGTHS
  212.    - STATUS_CHUNK_ZERO_LENGTH
  213.    - STATUS_CHUNK_IN_HEADER
  214.    - STATUS_CHUNK_OUT_OF_FILE
  215.    - STATUS_CHUNK_OK
  216.    - STATUS_CHUNK_NOT_CREATED
  217.  
  218.   Deprecated. Use :attr:`metadata` instead.
  219.   """
  220.   self.chunk_headers = _ChunkHeaderWrapper(self.metadata)
  221.   """
  222.   dict containing the metadata found in each chunk block:
  223.  
  224.   ``chunk_headers[x, z]: (length, compression, chunk_status)``
  225.  
  226.   :chunk length: in bytes, starting from the compression byte (32 bit int)
  227.   :compression: is 1 (Gzip) or 2 (bzip) (8 bit int)
  228.   :chunk_status: is equal to status in :attr:`header`.
  229.  
  230.   If the chunk is not defined, the tuple is (None, None, STATUS_CHUNK_NOT_CREATED)
  231.  
  232.   Deprecated. Use :attr:`metadata` instead.
  233.   """

  234.   self.loc = Location()
  235.   """Optional: x,z location of a region within a world."""
  236.  
  237.   self._init_header()
  238.   self._parse_header()
  239.   self._parse_chunk_headers()

  240.     def get_size(self):
  241.   """ Returns the file size in bytes. """
  242.   # seek(0,2) jumps to 0-bytes from the end of the file.
  243.   # Python 2.6 support: seek does not yet return the position.
  244.   self.file.seek(0, SEEK_END)
  245.   return self.file.tell()

  246.     @staticmethod
  247.     def _bytes_to_sector(bsize, sectorlength=SECTOR_LENGTH):
  248.   """Given a size in bytes, return how many sections of length sectorlen are required to contain it.
  249.   This is equivalent to ceil(bsize/sectorlen), if Python would use floating
  250.   points for division, and integers for ceil(), rather than the other way around."""
  251.   sectors, remainder = divmod(bsize, sectorlength)
  252.   return sectors if remainder == 0 else sectors + 1
  253.    
  254.     def close(self):
  255.   """
  256.   Clean up resources after use.
  257.  
  258.   Note that the instance is no longer readable nor writable after calling close().
  259.   The method is automatically called by garbage collectors, but made public to
  260.   allow explicit cleanup.
  261.   """
  262.   if self._closefile:
  263.    try:
  264.     self.file.close()
  265.    except IOError:
  266.     pass

  267.     def __del__(self):
  268.   self.close()
  269.   # Parent object() has no __del__ method, otherwise it should be called here.

  270.     def _init_file(self):
  271.   """Initialise the file header. This will erase any data previously in the file."""
  272.   header_length = 2*SECTOR_LENGTH
  273.   if self.size > header_length:
  274.    self.file.truncate(header_length)
  275.   self.file.seek(0)
  276.   self.file.write(header_length*b'\x00')
  277.   self.size = header_length

  278.     def _init_header(self):
  279.   for x in range(32):
  280.    for z in range(32):
  281.     self.metadata[x,z] = ChunkMetadata(x, z)

  282.     def _parse_header(self):
  283.   """Read the region header and stores: offset, length and status."""
  284.   # update the file size, needed when parse_header is called after
  285.   # we have unlinked a chunk or writed a new one
  286.   self.size = self.get_size()

  287.   if self.size == 0:
  288.    # Some region files seems to have 0 bytes of size, and
  289.    # Minecraft handle them without problems. Take them
  290.    # as empty region files.
  291.    return
  292.   elif self.size < 2*SECTOR_LENGTH:
  293.    raise NoRegionHeader('The region file is %d bytes, too small in size to have a header.' % self.size)
  294.  
  295.   for index in range(0, SECTOR_LENGTH, 4):
  296.    x = int(index//4) % 32
  297.    z = int(index//4)//32
  298.    m = self.metadata[x, z]
  299.    
  300.    self.file.seek(index)
  301.    offset, length = unpack(">IB", b"\0" + self.file.read(4))
  302.    m.blockstart, m.blocklength = offset, length
  303.    self.file.seek(index + SECTOR_LENGTH)
  304.    m.timestamp = unpack(">I", self.file.read(4))[0]
  305.    
  306.    if offset == 0 and length == 0:
  307.     m.status = STATUS_CHUNK_NOT_CREATED
  308.    elif length == 0:
  309.     m.status = STATUS_CHUNK_ZERO_LENGTH
  310.    elif offset < 2 and offset != 0:
  311.     m.status = STATUS_CHUNK_IN_HEADER
  312.    elif SECTOR_LENGTH * offset + 5 > self.size:
  313.     # Chunk header can't be read.
  314.     m.status = STATUS_CHUNK_OUT_OF_FILE
  315.    else:
  316.     m.status = STATUS_CHUNK_OK
  317.  
  318.   # Check for chunks overlapping in the file
  319.   for chunks in self._sectors()[2:]:
  320.    if len(chunks) > 1:
  321.     # overlapping chunks
  322.     for m in chunks:
  323.   # Update status, unless these more severe errors take precedence
  324.   if m.status not in (STATUS_CHUNK_ZERO_LENGTH, STATUS_CHUNK_IN_HEADER,
  325.     STATUS_CHUNK_OUT_OF_FILE):
  326.    m.status = STATUS_CHUNK_OVERLAPPING

  327.     def _parse_chunk_headers(self):
  328.   for x in range(32):
  329.    for z in range(32):
  330.     m = self.metadata[x, z]
  331.     if m.status not in (STATUS_CHUNK_OK, STATUS_CHUNK_OVERLAPPING, \
  332.    STATUS_CHUNK_MISMATCHED_LENGTHS):
  333.   # skip to next if status is NOT_CREATED, OUT_OF_FILE, IN_HEADER,
  334.   # ZERO_LENGTH or anything else.
  335.   continue
  336.     try:
  337.   self.file.seek(m.blockstart*SECTOR_LENGTH) # offset comes in sectors of 4096 bytes
  338.   length = unpack(">I", self.file.read(4))
  339.   m.length = length[0] # unpack always returns a tuple, even unpacking one element
  340.   compression = unpack(">B",self.file.read(1))
  341.   m.compression = compression[0]
  342.     except IOError:
  343.   m.status = STATUS_CHUNK_OUT_OF_FILE
  344.   continue
  345.     if m.blockstart*SECTOR_LENGTH + m.length + 4 > self.size:
  346.   m.status = STATUS_CHUNK_OUT_OF_FILE
  347.     elif m.length <= 1: # chunk can't be zero length
  348.   m.status = STATUS_CHUNK_ZERO_LENGTH
  349.     elif m.length + 4 > m.blocklength * SECTOR_LENGTH:
  350.   # There are not enough sectors allocated for the whole block
  351.   m.status = STATUS_CHUNK_MISMATCHED_LENGTHS

  352.     def _sectors(self, ignore_chunk=None):
  353.   """
  354.   Return a list of all sectors; each sector is a list of the chunks occupying it.
  355.   """
  356.   sectorsize = self._bytes_to_sector(self.size)
  357.   sectors = [[] for s in range(sectorsize)]
  358.   sectors[0] = True # locations
  359.   sectors[1] = True # timestamps
  360.   for m in self.metadata.values():
  361.    if not m.is_created():
  362.     continue
  363.    if ignore_chunk == m:
  364.     continue
  365.    if m.blocklength and m.blockstart:
  366.     blockend = m.blockstart + max(m.blocklength, m.requiredblocks())
  367.     # Ensure 2 <= b < sectorsize, as well as m.blockstart <= b < blockend
  368.     for b in range(max(m.blockstart, 2), min(blockend, sectorsize)):
  369.   sectors[b].append(m)
  370.   return sectors

  371.     def _locate_free_sectors(self, ignore_chunk=None):
  372.   """Return a list of booleans, indicating the free sectors."""
  373.   sectors = self._sectors(ignore_chunk=ignore_chunk)
  374.   # Sectors are considered free, if the value is an empty list.
  375.   return [not i for i in sectors]

  376.     def _find_free_location(self, free_locations, required_sectors=1, preferred=None):
  377.   """
  378.   Given a list of booleans, find a list of <required_sectors> consecutive True values.
  379.   If no such run is found, return len(free_locations).
  380.   Assumes first two values are always False.
  381.   """
  382.   # check preferred (current) location
  383.   if preferred and all(free_locations[preferred:preferred+required_sectors]):
  384.    return preferred
  385.  
  386.   # check other locations
  387.   # Note: the slicing may exceed the free_locations boundary.
  388.   # This implementation relies on the fact that slicing will work anyway,
  389.   # and the all() function returns True for an empty list. This ensures
  390.   # that blocks outside the file are considered free as well.
  391.  
  392.   i = 2 # First two sectors are in use by the header
  393.   while i < len(free_locations):
  394.    if all(free_locations[i:i+required_sectors]):
  395.     break
  396.    i += 1
  397.   return i

  398.     def get_metadata(self):
  399.   """
  400.   Return a list of the metadata of each chunk that is defined in the region file.
  401.   This includes chunks which may not be readable for whatever reason,
  402.   but excludes chunks that are not yet defined.
  403.   """
  404.   return [m for m in self.metadata.values() if m.is_created()]

  405.     def get_chunks(self):
  406.   """
  407.   Return the x,z coordinates and length of the chunks that are defined in the region file.
  408.   This includes chunks which may not be readable for whatever reason.
  409.   Warning: despite the name, this function does not actually return the chunk,
  410.   but merely its metadata. Use get_chunk(x,z) to get the NBTFile, and then Chunk()
  411.   to get the actual chunk.
  412.  
  413.   This method is deprecated. Use :meth:`get_metadata` instead.
  414.   """
  415.   return self.get_chunk_coords()

  416.     def get_chunk_coords(self):
  417.   """
  418.   Return the x,z coordinates and length of the chunks that are defined in the region file.
  419.   This includes chunks which may not be readable for whatever reason.
  420.  
  421.   This method is deprecated. Use :meth:`get_metadata` instead.
  422.   """
  423.   chunks = []
  424.   for x in range(32):
  425.    for z in range(32):
  426.     m = self.metadata[x,z]
  427.     if m.is_created():
  428.   chunks.append({'x': x, 'z': z, 'length': m.blocklength})
  429.   return chunks

  430.     def iter_chunks(self):
  431.   """
  432.   Yield each readable chunk present in the region.
  433.   Chunks that can not be read for whatever reason are silently skipped.
  434.   Warning: this function returns a :class:`nbt.nbt.NBTFile` object, use ``Chunk(nbtfile)`` to get a
  435.   :class:`nbt.chunk.Chunk` instance.
  436.   """
  437.   for m in self.get_metadata():
  438.    try:
  439.     yield self.get_chunk(m.x, m.z)
  440.    except RegionFileFormatError:
  441.     pass

  442.     # The following method will replace 'iter_chunks'
  443.     # but the previous is kept for the moment
  444.     # until the users update their code

  445.     def iter_chunks_class(self):
  446.   """
  447.   Yield each readable chunk present in the region.
  448.   Chunks that can not be read for whatever reason are silently skipped.
  449.   This function returns a :class:`nbt.chunk.Chunk` instance.
  450.   """
  451.   for m in self.get_metadata():
  452.    try:
  453.     yield self.chunkclass(self.get_chunk(m.x, m.z))
  454.    except RegionFileFormatError:
  455.     pass

  456.     def __iter__(self):
  457.   return self.iter_chunks()

  458.     def get_timestamp(self, x, z):
  459.   """
  460.   Return the timestamp of when this chunk was last modified.
  461.  
  462.   Note that this returns the timestamp as-is. A timestamp may exist,
  463.   while the chunk does not, or it may return a timestamp of 0 even
  464.   while the chunk exists.
  465.  
  466.   To convert to an actual date, use `datetime.fromtimestamp()`.
  467.   """
  468.   return self.metadata[x,z].timestamp

  469.     def chunk_count(self):
  470.   """Return the number of defined chunks. This includes potentially corrupt chunks."""
  471.   return len(self.get_metadata())

  472.     def get_blockdata(self, x, z):
  473.   """
  474.   Return the decompressed binary data representing a chunk.
  475.  
  476.   May raise a RegionFileFormatError().
  477.   If decompression of the data succeeds, all available data is returned,
  478.   even if it is shorter than what is specified in the header (e.g. in case
  479.   of a truncated file and non-compressed data).
  480.   """
  481.   # read metadata block
  482.   m = self.metadata[x, z]
  483.   if m.status == STATUS_CHUNK_NOT_CREATED:
  484.    raise InconceivedChunk("Chunk %d,%d is not present in region" % (x,z))
  485.   elif m.status == STATUS_CHUNK_IN_HEADER:
  486.    raise RegionHeaderError('Chunk %d,%d is in the region header' % (x,z))
  487.   elif m.status == STATUS_CHUNK_OUT_OF_FILE and (m.length <= 1 or m.compression == None):
  488.    # Chunk header is outside of the file.
  489.    raise RegionHeaderError('Chunk %d,%d is partially/completely outside the file' % (x,z))
  490.   elif m.status == STATUS_CHUNK_ZERO_LENGTH:
  491.    if m.blocklength == 0:
  492.     raise RegionHeaderError('Chunk %d,%d has zero length' % (x,z))
  493.    else:
  494.     raise ChunkHeaderError('Chunk %d,%d has zero length' % (x,z))
  495.   elif m.blockstart * SECTOR_LENGTH + 5 >= self.size:
  496.    raise RegionHeaderError('Chunk %d,%d is partially/completely outside the file' % (x,z))

  497.   # status is STATUS_CHUNK_OK, STATUS_CHUNK_MISMATCHED_LENGTHS, STATUS_CHUNK_OVERLAPPING
  498.   # or STATUS_CHUNK_OUT_OF_FILE.
  499.   # The chunk is always read, but in case of an error, the exception may be different
  500.   # based on the status.

  501.   err = None
  502.   try:
  503.    # offset comes in sectors of 4096 bytes + length bytes + compression byte
  504.    self.file.seek(m.blockstart * SECTOR_LENGTH + 5)
  505.    # Do not read past the length of the file.
  506.    # The length in the file includes the compression byte, hence the -1.
  507.    length = min(m.length - 1, self.size - (m.blockstart * SECTOR_LENGTH + 5))
  508.    chunk = self.file.read(length)
  509.    
  510.    if (m.compression == COMPRESSION_GZIP):
  511.     # Python 3.1 and earlier do not yet support gzip.decompress(chunk)
  512.     f = gzip.GzipFile(fileobj=BytesIO(chunk))
  513.     chunk = bytes(f.read())
  514.     f.close()
  515.    elif (m.compression == COMPRESSION_ZLIB):
  516.     chunk = zlib.decompress(chunk)
  517.    elif m.compression != COMPRESSION_NONE:
  518.     raise ChunkDataError('Unknown chunk compression/format (%s)' % m.compression)
  519.    
  520.    return chunk
  521.   except RegionFileFormatError:
  522.    raise
  523.   except Exception as e:
  524.    # Deliberately catch the Exception and re-raise.
  525.    # The details in gzip/zlib/nbt are irrelevant, just that the data is garbled.
  526.    err = '%s' % e # avoid str(e) due to Unicode issues in Python 2.
  527.   if err:
  528.    # don't raise during exception handling to avoid the warning
  529.    # "During handling of the above exception, another exception occurred".
  530.    # Python 3.3 solution (see PEP 409 & 415): "raise ChunkDataError(str(e)) from None"
  531.    if m.status == STATUS_CHUNK_MISMATCHED_LENGTHS:
  532.     raise ChunkHeaderError('The length in region header and the length in the header of chunk %d,%d are incompatible' % (x,z))
  533.    elif m.status == STATUS_CHUNK_OVERLAPPING:
  534.     raise ChunkHeaderError('Chunk %d,%d is overlapping with another chunk' % (x,z))
  535.    else:
  536.     raise ChunkDataError(err)

  537.     def get_nbt(self, x, z):
  538.   """
  539.   Return a NBTFile of the specified chunk.
  540.   Raise InconceivedChunk if the chunk is not included in the file.
  541.   """
  542.   # TODO: cache results?
  543.   data = self.get_blockdata(x, z) # This may raise a RegionFileFormatError.
  544.   data = BytesIO(data)
  545.   err = None
  546.   try:
  547.    nbt = NBTFile(buffer=data)
  548.    if self.loc.x != None:
  549.     x += self.loc.x*32
  550.    if self.loc.z != None:
  551.     z += self.loc.z*32
  552.    nbt.loc = Location(x=x, z=z)
  553.    return nbt
  554.    # this may raise a MalformedFileError. Convert to ChunkDataError.
  555.   except MalformedFileError as e:
  556.    err = '%s' % e # avoid str(e) due to Unicode issues in Python 2.
  557.   if err:
  558.    raise ChunkDataError(err)

  559.     def get_chunk(self, x, z):
  560.   """
  561.   Return a NBTFile of the specified chunk.
  562.   Raise InconceivedChunk if the chunk is not included in the file.
  563.  
  564.   Note: this function may be changed later to return a Chunk() rather
  565.   than a NBTFile() object. To keep the old functionality, use get_nbt().
  566.   """
  567.   return self.get_nbt(x, z)

  568.     def write_blockdata(self, x, z, data, compression=COMPRESSION_ZLIB):
  569.   """
  570.   Compress the data, write it to file, and add pointers in the header so it
  571.   can be found as chunk(x,z).
  572.   """
  573.   if compression == COMPRESSION_GZIP:
  574.    # Python 3.1 and earlier do not yet support `data = gzip.compress(data)`.
  575.    compressed_file = BytesIO()
  576.    f = gzip.GzipFile(fileobj=compressed_file)
  577.    f.write(data)
  578.    f.close()
  579.    compressed_file.seek(0)
  580.    data = compressed_file.read()
  581.    del compressed_file
  582.   elif compression == COMPRESSION_ZLIB:
  583.    data = zlib.compress(data) # use zlib compression, rather than Gzip
  584.   elif compression != COMPRESSION_NONE:
  585.    raise ValueError("Unknown compression type %d" % compression)
  586.   length = len(data)

  587.   # 5 extra bytes are required for the chunk block header
  588.   nsectors = self._bytes_to_sector(length + 5)

  589.   if nsectors >= 256:
  590.    raise ChunkDataError("Chunk is too large (%d sectors exceeds 255 maximum)" % (nsectors))

  591.   # Ensure file has a header
  592.   if self.size < 2*SECTOR_LENGTH:
  593.    self._init_file()

  594.   # search for a place where to write the chunk:
  595.   current = self.metadata[x, z]
  596.   free_sectors = self._locate_free_sectors(ignore_chunk=current)
  597.   sector = self._find_free_location(free_sectors, nsectors, preferred=current.blockstart)

  598.   # If file is smaller than sector*SECTOR_LENGTH (it was truncated), pad it with zeroes.
  599.   if self.size < sector*SECTOR_LENGTH:
  600.    # jump to end of file
  601.    self.file.seek(0, SEEK_END)
  602.    self.file.write((sector*SECTOR_LENGTH - self.size) * b"\x00")
  603.    assert self.file.tell() == sector*SECTOR_LENGTH

  604.   # write out chunk to region
  605.   self.file.seek(sector*SECTOR_LENGTH)
  606.   self.file.write(pack(">I", length + 1)) #length field
  607.   self.file.write(pack(">B", compression)) #compression field
  608.   self.file.write(data) #compressed data

  609.   # Write zeros up to the end of the chunk
  610.   remaining_length = SECTOR_LENGTH * nsectors - length - 5
  611.   self.file.write(remaining_length * b"\x00")

  612.   #seek to header record and write offset and length records
  613.   self.file.seek(4 * (x + 32*z))
  614.   self.file.write(pack(">IB", sector, nsectors)[1:])

  615.   #write timestamp
  616.   self.file.seek(SECTOR_LENGTH + 4 * (x + 32*z))
  617.   timestamp = int(time.time())
  618.   self.file.write(pack(">I", timestamp))

  619.   # Update free_sectors with newly written block
  620.   # This is required for calculating file truncation and zeroing freed blocks.
  621.   free_sectors.extend((sector + nsectors - len(free_sectors)) * [True])
  622.   for s in range(sector, sector + nsectors):
  623.    free_sectors[s] = False
  624.  
  625.   # Check if file should be truncated:
  626.   truncate_count = list(reversed(free_sectors)).index(False)
  627.   if truncate_count > 0:
  628.    self.size = SECTOR_LENGTH * (len(free_sectors) - truncate_count)
  629.    self.file.truncate(self.size)
  630.    free_sectors = free_sectors[:-truncate_count]
  631.  
  632.   # Calculate freed sectors
  633.   for s in range(current.blockstart, min(current.blockstart + current.blocklength, len(free_sectors))):
  634.    if free_sectors[s]:
  635.     # zero sector s
  636.     self.file.seek(SECTOR_LENGTH*s)
  637.     self.file.write(SECTOR_LENGTH*b'\x00')
  638.  
  639.   # update file size and header information
  640.   self.size = max((sector + nsectors)*SECTOR_LENGTH, self.size)
  641.   assert self.get_size() == self.size
  642.   current.blockstart = sector
  643.   current.blocklength = nsectors
  644.   current.status = STATUS_CHUNK_OK
  645.   current.timestamp = timestamp
  646.   current.length = length + 1
  647.   current.compression = compression

  648.   # self.parse_header()
  649.   # self.parse_chunk_headers()

  650.     def write_chunk(self, x, z, nbt_file):
  651.   """
  652.   Pack the NBT file as binary data, and write to file in a compressed format.
  653.   """
  654.   data = BytesIO()
  655.   nbt_file.write_file(buffer=data) # render to buffer; uncompressed
  656.   self.write_blockdata(x, z, data.getvalue())

  657.     def unlink_chunk(self, x, z):
  658.   """
  659.   Remove a chunk from the header of the region file.
  660.   Fragmentation is not a problem, chunks are written to free sectors when possible.
  661.   """
  662.   # This function fails for an empty file. If that is the case, just return.
  663.   if self.size < 2*SECTOR_LENGTH:
  664.    return

  665.   # zero the region header for the chunk (offset length and time)
  666.   self.file.seek(4 * (x + 32*z))
  667.   self.file.write(pack(">IB", 0, 0)[1:])
  668.   self.file.seek(SECTOR_LENGTH + 4 * (x + 32*z))
  669.   self.file.write(pack(">I", 0))

  670.   # Check if file should be truncated:
  671.   current = self.metadata[x, z]
  672.   free_sectors = self._locate_free_sectors(ignore_chunk=current)
  673.   truncate_count = list(reversed(free_sectors)).index(False)
  674.   if truncate_count > 0:
  675.    self.size = SECTOR_LENGTH * (len(free_sectors) - truncate_count)
  676.    self.file.truncate(self.size)
  677.    free_sectors = free_sectors[:-truncate_count]
  678.  
  679.   # Calculate freed sectors
  680.   for s in range(current.blockstart, min(current.blockstart + current.blocklength, len(free_sectors))):
  681.    if free_sectors[s]:
  682.     # zero sector s
  683.     self.file.seek(SECTOR_LENGTH*s)
  684.     self.file.write(SECTOR_LENGTH*b'\x00')

  685.   # update the header
  686.   self.metadata[x, z] = ChunkMetadata(x, z)

  687.     def _classname(self):
  688.   """Return the fully qualified class name."""
  689.   if self.__class__.__module__ in (None,):
  690.    return self.__class__.__name__
  691.   else:
  692.    return "%s.%s" % (self.__class__.__module__, self.__class__.__name__)

  693.     def __str__(self):
  694.   if self.filename:
  695.    return "<%s(%r)>" % (self._classname(), self.filename)
  696.   else:
  697.    return '<%s object at %d>' % (self._classname(), id(self))
  698.    
  699.     def __repr__(self):
  700.   if self.filename:
  701.    return "%s(%r)" % (self._classname(), self.filename)
  702.   else:
  703.    return '<%s object at %d>' % (self._classname(), id(self))
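
If you just want to poke at one region file with the module above, the usage is roughly this. A minimal, untested sketch: the path and the chunk coordinates 0,0 are examples, and pretty_tree() comes from nbt/nbt.py.

Code:

from nbt.region import RegionFile, InconceivedChunk

# Path is an example -- point it at a region file inside your own save.
reg = RegionFile("saves/MyWorld/region/r.0.0.mca")
try:
    # Which chunks does this file actually contain?
    for m in reg.get_metadata():
        print("chunk %d,%d: %d bytes, timestamp %d" % (m.x, m.z, m.length, m.timestamp))

    try:
        # Read one chunk as an NBTFile and dump its tag tree.
        chunk_nbt = reg.get_nbt(0, 0)
        print(chunk_nbt.pretty_tree())

        # ...edit chunk_nbt here, then write it back into the region file.
        reg.write_chunk(0, 0, chunk_nbt)
    except InconceivedChunk:
        print("chunk 0,0 is not stored in this region file")
finally:
    reg.close()

get_nbt() hands you the chunk as a plain NBTFile; wrap it in nbt.chunk.Chunk if you want the block-level helpers instead of raw tags.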



nbt/world.py


Code:


  1. """
  2. Handles a Minecraft world save using either the Anvil or McRegion format.
  3. For more information about the world format:
  4. https://minecraft.gamepedia.com/Level_format
  5. """

  6. import os, glob, re
  7. from . import region
  8. from . import chunk
  9. from .region import InconceivedChunk, Location

  10. class UnknownWorldFormat(Exception):
  11.     """Unknown or invalid world folder."""
  12.     def __init__(self, msg=""):
  13.   self.msg = msg


  14. class _BaseWorldFolder(object):
  15.     """
  16.     Abstract class, representing either a McRegion or Anvil world folder.
  17.     This class will use either Anvil or McRegion, with Anvil the preferred format.
  18.     Simply calling WorldFolder() will do this automatically.
  19.     """
  20.     type = "Generic"
  21.     extension = ''
  22.     chunkclass = chunk.Chunk

  23.     def __init__(self, world_folder):
  24.   """Initialize a WorldFolder."""
  25.   self.worldfolder = world_folder
  26.   self.regionfiles = {}
  27.   self.regions  = {}
  28.   self.chunks= None
  29.   # os.listdir triggers an OSError for non-existent directories or permission errors.
  30.   # This is needed, because glob.glob silently returns no files.
  31.   os.listdir(world_folder)
  32.   self.set_regionfiles(self.get_filenames())

  33.     def get_filenames(self):
  34.   """Find all matching file names in the world folder.
  35.  
  36.   This method is private, and its use is deprecated. Use get_regionfiles() instead."""
  37.   # Warning: glob returns an empty list if the directory is unreadable, without raising an Exception
  38.   return list(glob.glob(os.path.join(self.worldfolder,'region','r.*.*.'+self.extension)))

  39.     def set_regionfiles(self, filenames):
  40.   """
  41.   This method directly sets the region files for this instance to use.
  42.   It assumes the filenames are in the form r.<x-digit>.<z-digit>.<extension>
  43.   """
  44.   for filename in filenames:
  45.    # Assume that filenames have the name r.<x-digit>.<z-digit>.<extension>
  46.    m = re.match(r"r.(\-?\d+).(\-?\d+)."+self.extension, os.path.basename(filename))
  47.    if m:
  48.     x = int(m.group(1))
  49.     z = int(m.group(2))
  50.    else:
  51.     # Only raised if a .mca or .mcr file exists which does not comply with the
  52.     #r.<x-digit>.<z-digit>.<extension> filename format. This may raise false
  53.     # errors if a copy is made, e.g. "r.0.-1 copy.mca". If this is an issue, override
  54.     # get_filenames(). In most cases, it is an error, and we like to raise that.
  55.     # Changed, no longer raise error, because we want to continue the loop.
  56.     # raise UnknownWorldFormat("Unrecognized filename format %s" % os.path.basename(filename))
  57.     # TODO: log to stderr using logging facility.
  58.     continue  # skip filenames that do not match the pattern
  59.    self.regionfiles[(x,z)] = filename

  60.     def get_regionfiles(self):
  61.   """Return a list of full path of all region files."""
  62.   return list(self.regionfiles.values())

  63.     def nonempty(self):
  64.   """Return True if the world is non-empty."""
  65.   return len(self.regionfiles) > 0

  66.     def get_region(self, x,z):
  67.   """Get a region using x,z coordinates of a region. Cache results."""
  68.   if (x,z) not in self.regions:
  69.    if (x,z) in self.regionfiles:
  70.     self.regions[(x,z)] = region.RegionFile(self.regionfiles[(x,z)])
  71.    else:
  72.     # Return an empty RegionFile object
  73.     # TODO: this does not yet allow for saving of the region file
  74.     # TODO: this currently fails with a ValueError!
  75.     # TODO: generate the correct name, and create the file
  76.     # and add the fie to self.regionfiles
  77.     self.regions[(x,z)] = region.RegionFile()
  78.    self.regions[(x,z)].loc = Location(x=x,z=z)
  79.   return self.regions[(x,z)]

  80.     def iter_regions(self):
  81.   """
  82.   Return an iterable list of all region files. Use this function if you only
  83.   want to loop through each region file once, and do not want to cache the results.
  84.   """
  85.   # TODO: Implement BoundingBox
  86.   # TODO: Implement sort order
  87.   for x,z in self.regionfiles.keys():
  88.    close_after_use = False
  89.    if (x,z) in self.regions:
  90.     regionfile = self.regions[(x,z)]
  91.    else:
  92.     # It is not yet cached.
  93.     # Get file, but do not cache later.
  94.     regionfile = region.RegionFile(self.regionfiles[(x,z)], chunkclass = self.chunkclass)
  95.     regionfile.loc = Location(x=x,z=z)
  96.     close_after_use = True
  97.    try:
  98.     yield regionfile
  99.    finally:
  100.     if close_after_use:
  101.   regionfile.close()

  102.     def call_for_each_region(self, callback_function, boundingbox=None):
  103.   """
  104.   Return an iterable that calls callback_function for each region file
  105.   in the world. This is equivalent to:
  106.   ```
  107.   for the_region in iter_regions():
  108.     yield callback_function(the_region)
  109.   ```
  110.  
  111.   This function is threaded. It uses pickle to pass values between threads.
  112.   See [What can be pickled and unpickled?](https://docs.python.org/library/pickle.html#what-can-be-pickled-and-unpickled) in the Python documentation
  113.   for limitations on the output of `callback_function()`.
  114.   """
  115.   raise NotImplementedError()

  116.     def get_nbt(self,x,z):
  117.   """
  118.   Return a NBT specified by the chunk coordinates x,z. Raise InconceivedChunk
  119.   if the NBT file is not yet generated. To get a Chunk object, use get_chunk.
  120.   """
  121.   rx,cx = divmod(x,32)
  122.   rz,cz = divmod(z,32)
  123.   if (rx,rz) not in self.regions and (rx,rz) not in self.regionfiles:
  124.    raise InconceivedChunk("Chunk %s,%s is not present in world" % (x,z))
  125.   nbt = self.get_region(rx,rz).get_nbt(cx,cz)
  126.   assert nbt != None
  127.   return nbt

  128.     def set_nbt(self,x,z,nbt):
  129.   """
  130.   Set a chunk. Overrides the NBT if it already existed. If the NBT did not exist,
  131.   adds it to the Regionfile. May create a new Regionfile if that did not exist yet.
  132.   nbt must be a nbt.NBTFile instance, not a Chunk or regular TAG_Compound object.
  133.   """
  134.   raise NotImplementedError()
  135.   # TODO: implement

  136.     def iter_nbt(self):
  137.   """
  138.   Return an iterable list of all NBT. Use this function if you only
  139.   want to loop through the chunks once, and don't need the block or data arrays.
  140.   """
  141.   # TODO: Implement BoundingBox
  142.   # TODO: Implement sort order
  143.   for region in self.iter_regions():
  144.    for c in region.iter_chunks():
  145.     yield c

  146.     def call_for_each_nbt(self, callback_function, boundingbox=None):
  147.   """
  148.   Return an iterable that calls callback_function for each NBT structure
  149.   in the world. This is equivalent to:
  150.   ```
  151.   for the_nbt in iter_nbt():
  152.     yield callback_function(the_nbt)
  153.   ```
  154.  
  155.   This function is threaded. It uses pickle to pass values between threads.
  156.   See [What can be pickled and unpickled?](https://docs.python.org/library/pickle.html#what-can-be-pickled-and-unpickled) in the Python documentation
  157.   for limitations on the output of `callback_function()`.
  158.   """
  159.   raise NotImplementedError()

  160.     def get_chunk(self,x,z):
  161.   """
  162.   Return a chunk specified by the chunk coordinates x,z. Raise InconceivedChunk
  163.   if the chunk is not yet generated. To get the raw NBT data, use get_nbt.
  164.   """
  165.   return self.chunkclass(self.get_nbt(x, z))

  166.     def get_chunks(self, boundingbox=None):
  167.   """
  168.   Return a list of all chunks. Use this function if you access the chunk
  169.   list frequently and want to cache the result.
  170.   Use iter_chunks() if you only want to loop through the chunks once or have a
  171.   very large world.
  172.   """
  173.   if self.chunks == None:
  174.    self.chunks = list(self.iter_chunks())
  175.   return self.chunks

  176.     def iter_chunks(self):
  177.   """
  178.   Return an iterable list of all chunks. Use this function if you only
  179.   want to loop through the chunks once or have a very large world.
  180.   Use get_chunks() if you access the chunk list frequently and want to cache
  181.   the results. Use iter_nbt() if you are concerned about speed and don't want
  182.   to parse the block data.
  183.   """
  184.   # TODO: Implement BoundingBox
  185.   # TODO: Implement sort order
  186.   for c in self.iter_nbt():
  187.    yield self.chunkclass(c)

  188.     def chunk_count(self):
  189.   """Return a count of the chunks in this world folder."""
  190.   c = 0
  191.   for r in self.iter_regions():
  192.    c += r.chunk_count()
  193.   return c

  194.     def get_boundingbox(self):
  195.   """
  196.   Return minimum and maximum x and z coordinates of the chunks that
  197.   make up this world save.
  198.   """
  199.   b = BoundingBox()
  200.   for rx,rz in self.regionfiles.keys():
  201.    region = self.get_region(rx,rz)
  202.    rx,rz = 32*rx,32*rz
  203.    for cc in region.get_chunk_coords():
  204.     x,z = (rx+cc['x'],rz+cc['z'])
  205.     b.expand(x,None,z)
  206.   return b

  207.     def __repr__(self):
  208.   return "%s(%r)" % (self.__class__.__name__,self.worldfolder)


  209. class McRegionWorldFolder(_BaseWorldFolder):
  210.     """Represents a world save using the old McRegion format."""
  211.     type = "McRegion"
  212.     extension = 'mcr'
  213.     chunkclass = chunk.McRegionChunk


  214. class AnvilWorldFolder(_BaseWorldFolder):
  215.     """Represents a world save using the new Anvil format."""
  216.     type = "Anvil"
  217.     extension = 'mca'
  218.     chunkclass = chunk.AnvilChunk


  219. class _WorldFolderFactory(object):
  220.     """Factory class: instantiate the subclasses in order, and return the first instance
  221.     whose nonempty() method returns True. If no nonempty() returns True,
  222.     an UnknownWorldFormat exception is raised."""
  223.     def __init__(self, subclasses):
  224.   self.subclasses = subclasses
  225.     def __call__(self, *args, **kwargs):
  226.   for cls in self.subclasses:
  227.    wf = cls(*args, **kwargs)
  228.    if wf.nonempty(): # Check if the world is non-empty
  229.     return wf
  230.   raise UnknownWorldFormat("Empty world or unknown format")

  231. WorldFolder = _WorldFolderFactory([AnvilWorldFolder, McRegionWorldFolder])
  232. """
  233. Factory instance that returns an AnvilWorldFolder or McRegionWorldFolder
  234. instance, or raises an UnknownWorldFormat.
  235. """



  236. class BoundingBox(object):
  237.     """A bounding box of x,y,z coordinates."""
  238.     def __init__(self, minx=None, maxx=None, miny=None, maxy=None, minz=None, maxz=None):
  239.   self.minx,self.maxx = minx, maxx
  240.   self.miny,self.maxy = miny, maxy
  241.   self.minz,self.maxz = minz, maxz
  242.     def expand(self,x,y,z):
  243.   """
  244.   Expand the bounding box to include the point (x, y, z); None coordinates are ignored.
  245.   """
  246.   if x != None:
  247.    if self.minx is None or x < self.minx:
  248.     self.minx = x
  249.    if self.maxx is None or x > self.maxx:
  250.     self.maxx = x
  251.   if y != None:
  252.    if self.miny is None or y < self.miny:
  253.     self.miny = y
  254.    if self.maxy is None or y > self.maxy:
  255.     self.maxy = y
  256.   if z != None:
  257.    if self.minz is None or z < self.minz:
  258.     self.minz = z
  259.    if self.maxz is None or z > self.maxz:
  260.     self.maxz = z
  261.     def lenx(self):
  262.   if self.maxx is None or self.minx is None:
  263.    return 0
  264.   return self.maxx-self.minx+1
  265.     def leny(self):
  266.   if self.maxy is None or self.miny is None:
  267.    return 0
  268.   return self.maxy-self.miny+1
  269.     def lenz(self):
  270.   if self.maxz is None or self.minz is None:
  271.    return 0
  272.   return self.maxz-self.minz+1
  273.     def __repr__(self):
  274.   return "%s(%s,%s,%s,%s,%s,%s)" % (self.__class__.__name__,self.minx,self.maxx,
  275.     self.miny,self.maxy,self.minz,self.maxz)
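
And if you would rather start from the whole save instead of a single .mca, world.py above wraps that up. Another small sketch; the path is an example:

Code:

from nbt.world import WorldFolder

# WorldFolder() picks AnvilWorldFolder or McRegionWorldFolder automatically,
# or raises UnknownWorldFormat if the folder holds neither.
world = WorldFolder("saves/MyWorld")        # example path

print("chunks in save:", world.chunk_count())
print("chunk bounds  :", world.get_boundingbox())

# Walk the raw NBT of every chunk once, without caching the whole list.
for chunk_nbt in world.iter_nbt():
    print(chunk_nbt.loc)     # chunk location, set by region.get_nbt()
    break                    # drop the break to really visit every chunk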



/setup.py

Code:

  1. #!/usr/bin/env python

  2. from setuptools import setup
  3. from nbt import VERSION

  4. setup(
  5. name    = 'NBT',
  6. version    = ".".join(str(x) for x in VERSION),
  7. description   = 'Named Binary Tag Reader/Writer',
  8. author  = 'Thomas Woolford',
  9. author_email  = '[email protected]',
  10. url  = 'http://github.com/twoolie/NBT',
  11. license    = open("LICENSE.txt").read(),
  12. long_description = open("README.txt").read(),
  13. packages   = ['nbt'],
  14. classifiers   = [
  15.   "Development Status :: 5 - Production/Stable",
  16.   "Intended Audience :: Developers",
  17.   "License :: OSI Approved :: MIT License",
  18.   "Operating System :: OS Independent",
  19.   "Programming Language :: Python",
  20.   "Programming Language :: Python :: 2.7",
  21.   "Programming Language :: Python :: 3.3",
  22.   "Programming Language :: Python :: 3.4",
  23.   "Programming Language :: Python :: 3.5",
  24.   "Programming Language :: Python :: 3.6",
  25.   "Topic :: Games/Entertainment",
  26.   "Topic :: Software Development :: Libraries :: Python Modules"
  27. ]
  28. )
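
Finally, for the command storage files themselves you don't need region.py at all: command_storage_<namespace>.dat under the world's data folder is ordinary gzip-compressed NBT, so nbt/nbt.py can read and rewrite it directly. Rough sketch; the path and the "data"/"contents"/"example:counter" tag names are assumptions, so check the pretty_tree() output of your own file first:

Code:

from nbt.nbt import NBTFile, TAG_Int

path = "saves/MyWorld/data/command_storage_minecraft.dat"   # example path

storage = NBTFile(path)          # NBTFile handles the gzip layer by itself
print(storage.pretty_tree())     # see what your file really looks like

# Tag layout below is an assumption -- adjust to what pretty_tree() printed.
contents = storage["data"]["contents"]
contents["example:counter"] = TAG_Int(42, name="example:counter")

storage.write_file(path)         # written back gzip-compressed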