Is there a way in Python 3 to read and modify files such as the NBT of a vanilla End ship, or the command storage files?
Yes, of course there is.
How could a programming language possibly be unable to read and write files?
Just read it byte by byte and parse it yourself.
https://wiki.biligame.com/mc/NBT%E6%A0%BC%E5%BC%8F
For the code idea you can refer to
https://www.mcbbs.net/thread-1014198-272076-1.html
I've collapsed below some NBT-parsing code that only handles int and compound ({}) tags.
I believe the equivalent simple Python code would be even shorter.
py:
open(...)
read(...)
I'm sure the OP knows Python well enough that the basic file-handling code doesn't need teaching.
Code:
private static String getYYS(int depth) {
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < depth; ++i) {
        sb.append("    ");
    }
    return sb.toString();
}

private static void read(DataInputStream in, int depth) throws IOException {
    boolean next = true;
    while (next) {
        byte tag = in.readByte();
        next = tag != 0;
        if (next) {
            short nameLength = in.readShort();
            if (nameLength != 0) {
                byte[] name = new byte[nameLength];
                if (in.read(name) != name.length) {
                    throw new IOException("ljyys for name error");
                }
                String tagName = new String(name);
                System.out.println(getYYS(depth) + tagName + " {");
            } else {
                System.out.println(getYYS(depth) + "{");
            }
            switch (tag) {
                case 0x3: { // TAG_Int: 4-byte big-endian value
                    System.out.println(getYYS(depth + 1) + in.readInt());
                    System.out.println(getYYS(depth) + "}");
                    break;
                }
                case 0xA: { // TAG_Compound: recurse one level deeper
                    read(in, depth + 1);
                    next = in.available() > 0;
                    break;
                }
                default: {
                    System.out.println("data left: " + in.available());
                    throw new IOException("ljyys for tag:" + tag);
                }
            }
        }
    }
    System.out.println(getYYS(depth) + "}");
}

public static void main(String[] args) throws Throwable {
    DataInputStream in = new DataInputStream(Files.newInputStream(Paths.get("command_storage_minecraft")));
    read(in, 0);
}
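For comparison, here is a minimal pure-Python sketch of the same idea; like the Java version above it only handles TAG_Int (0x3) and TAG_Compound (0xA). The file name command_storage_minecraft.dat, and the assumption that the file is gzip-compressed (vanilla command storage files normally are), are mine rather than anything from this thread.

Code:
import gzip
import struct

TAG_END, TAG_INT, TAG_COMPOUND = 0, 3, 10

def read_name(stream):
    # tag names are UTF-8 strings prefixed by a big-endian unsigned short length
    (length,) = struct.unpack(">H", stream.read(2))
    return stream.read(length).decode("utf-8")

def read_compound(stream, depth):
    # read child tags until a TAG_End byte closes this compound
    while True:
        tag_type = stream.read(1)[0]
        if tag_type == TAG_END:
            print("    " * depth + "}")
            return
        name = read_name(stream)
        if tag_type == TAG_INT:
            (value,) = struct.unpack(">i", stream.read(4))
            print("    " * (depth + 1) + name + ": " + str(value))
        elif tag_type == TAG_COMPOUND:
            print("    " * (depth + 1) + name + " {")
            read_compound(stream, depth + 1)
        else:
            raise IOError("unsupported tag type: %d" % tag_type)

with gzip.open("command_storage_minecraft.dat", "rb") as f:   # assumed path
    assert f.read(1)[0] == TAG_COMPOUND                       # the root tag is a compound
    print(read_name(f) + " {")
    read_compound(f, 0)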
阴阳师元素祭祀 posted on 2020-4-11 21:52
Yes, of course there is.
How could a programming language possibly be unable to read and write files?
https://wiki.biligame.com/mc/NBT%E6%A0%BC%E5%BC%8F
I asked about Python, but what you gave me is Java (

(=°ω°)丿 posted on 2020-4-11 21:53
I asked about Python, but what you gave me is Java (
You need @箱子's good stuff:
[Repost + Translation][Learning Programming from Scratch] Python 3 IV: Exceptions & Files
https://www.mcbbs.net/thread-990257-1-1.html
(Source: Minecraft Chinese Forum)
(=°ω°)丿 posted on 2020-4-11 22:24
Then could you help me download the Python NBT library from https://pca006132.neocities.org/tutorials/nbt/format.html ...
Since you can't download it yourself, please forgive me for posting the files this way:
https://github.com/twoolie/NBT
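For what it's worth, once these files are saved locally as an nbt package, reading and editing a command storage file could look roughly like the sketch below. The path world/data/command_storage_minecraft.dat and the tag name "example" are assumptions for illustration, not something the library dictates.

Code:
from nbt import nbt

# assumed location of command storage inside a world save
path = "world/data/command_storage_minecraft.dat"

nbtfile = nbt.NBTFile(path)        # NBTFile un-gzips and parses the whole tag tree
print(nbtfile.pretty_tree())       # inspect the tree before changing anything

# add (or overwrite) a top-level string tag; the name "example" is made up
nbtfile["example"] = nbt.TAG_String("hello from python")

nbtfile.write_file(path)           # re-gzip and write the modified tree back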
This is the important license:
 Copyright (c) 2010-2013 Thomas Woolford and contributors
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
nbt/__init__.py
Code:
 
- __all__ = ["nbt", "world", "region", "chunk"]
 
- from . import *
 
 
- # Documentation only automatically includes functions specified in __all__.
 
- # If you add more functions, please manually include them in doc/index.rst.
 
 
- VERSION = (1, 5, 0)
 
- """NBT version as tuple. Note that the major and minor revision number are 
 
- always present, but the patch identifier (the 3rd number) is only used in 1.4."""
 
 
- def _get_version():
 
-     """Return the NBT version as string."""
 
-     return ".".join([str(v) for v in VERSION])
 
nbt/chunk.py
Code:
 
- """
 
- Handles a single chunk of data (16x16x128 blocks) from a Minecraft save.
 
- For more information about the chunk format:
 
- https://minecraft.gamepedia.com/Chunk_format
 
- """
 
 
- from io import BytesIO
 
- from struct import pack
 
- import array
 
- import nbt
 
 
 
- # Legacy numeric block identifiers
 
- # mapped to alpha identifiers in best effort
 
- # See https://minecraft.gamepedia.com/Java_Edition_data_values/Pre-flattening
 
- # TODO: move this map into a separate file
 
 
- block_ids = {
 
-     0: 'air',
 
-     1: 'stone',
 
-     2: 'grass_block',
 
-     3: 'dirt',
 
-     4: 'cobblestone',
 
-     5: 'oak_planks',
 
-     6: 'sapling',
 
-     7: 'bedrock',
 
-     8: 'flowing_water',
 
-     9: 'water',
 
-   10: 'flowing_lava',
 
-   11: 'lava',
 
-   12: 'sand',
 
-   13: 'gravel',
 
-   14: 'gold_ore',
 
-   15: 'iron_ore',
 
-   16: 'coal_ore',
 
-   17: 'oak_log',
 
-   18: 'oak_leaves',
 
-   19: 'sponge',
 
-   20: 'glass',
 
-   21: 'lapis_ore',
 
-   24: 'sandstone',
 
-   30: 'cobweb',
 
-   31: 'grass',
 
-   32: 'dead_bush',
 
-   35: 'white_wool',
 
-   37: 'dandelion',
 
-   38: 'poppy',
 
-   39: 'brown_mushroom',
 
-   40: 'red_mushroom',
 
-   43: 'stone_slab',
 
-   44: 'stone_slab',
 
-   47: 'bookshelf',
 
-   48: 'mossy_cobblestone',
 
-   49: 'obsidian',
 
-   50: 'torch',
 
-   51: 'fire',
 
-   52: 'spawner',
 
-   53: 'oak_stairs',
 
-   54: 'chest',
 
-   56: 'diamond_ore',
 
-   58: 'crafting_table',
 
-   59: 'wheat',
 
-   60: 'farmland',
 
-   61: 'furnace',
 
-   62: 'furnace',
 
-   63: 'sign',# will change to oak_sign in 1.14
 
-   64: 'oak_door',
 
-   65: 'ladder',
 
-   66: 'rail',
 
-   67: 'cobblestone_stairs',
 
-   72: 'oak_pressure_plate',
 
-   73: 'redstone_ore',
 
-   74: 'redstone_ore',
 
-   78: 'snow',
 
-   79: 'ice',
 
-   81: 'cactus',
 
-   82: 'clay',
 
-   83: 'sugar_cane',
 
-   85: 'oak_fence',
 
-   86: 'pumpkin',
 
-   91: 'lit_pumpkin',
 
-     101: 'iron_bars',
 
-     102: 'glass_pane',
 
-     }
 
 
 
- def block_id_to_name(bid):
 
-     try:
 
-    name = block_ids[bid]
 
-     except KeyError:
 
-    name = 'unknown_%d' % (bid,)
 
-    print("warning: unknown block id %i" % bid)
 
-    print("hint: add that block to the 'block_ids' map")
 
-     return name
 
 
 
- # Generic Chunk
 
 
- class Chunk(object):
 
-     """Class for representing a single chunk."""
 
-     def __init__(self, nbt):
 
-    self.chunk_data = nbt['Level']
 
-    self.coords = self.chunk_data['xPos'],self.chunk_data['zPos']
 
 
-     def get_coords(self):
 
-    """Return the coordinates of this chunk."""
 
-    return (self.coords[0].value,self.coords[1].value)
 
 
-     def __repr__(self):
 
-    """Return a representation of this Chunk."""
 
-    return "Chunk("+str(self.coords[0])+","+str(self.coords[1])+")"
 
 
 
- # Chunk in Region old format
 
 
- class McRegionChunk(Chunk):
 
 
-     def __init__(self, nbt):
 
-    Chunk.__init__(self, nbt)
 
-    self.blocks = BlockArray(self.chunk_data['Blocks'].value, self.chunk_data['Data'].value)
 
 
-     def get_max_height(self):
 
-    return 127
 
 
-     def get_block(self, x, y, z):
 
-    name = block_id_to_name(self.blocks.get_block(x, y, z))
 
-    return name
 
 
-     def iter_block(self):
 
-    for y in range(0, 128):
 
-       for z in range(0, 16):
 
-         for x in range(0, 16):
 
-        yield self.get_block(x, y, z)
 
 
 
- # Section in Anvil new format
 
 
- class AnvilSection(object):
 
 
-     def __init__(self, nbt, version):
 
-    self.names = []
 
-    self.indexes = []
 
 
-    # Is the section flattened ?
 
-    # See https://minecraft.gamepedia.com/1.13/Flattening
 
 
-    if version == 0 or version == 1343:# 1343 = MC 1.12.2
 
-       self._init_array(nbt)
 
-    elif version == 1631:# MC 1.13
 
-       self._init_index(nbt)
 
-    else:
 
-       raise NotImplementedError()
 
 
-    # Section contains 4096 blocks whatever data version
 
 
-    assert len(self.indexes) == 4096
 
 
 
-     # Decode legacy section
 
-     # Contains an array of block numeric identifiers
 
 
-     def _init_array(self, nbt):
 
-    bids = []
 
-    for bid in nbt['Blocks'].value:
 
-       try:
 
-         i = bids.index(bid)
 
-       except ValueError:
 
-         bids.append(bid)
 
-         i = len(bids) - 1
 
-       self.indexes.append(i)
 
 
-    for bid in bids:
 
-       bname = block_id_to_name(bid)
 
-       self.names.append(bname)
 
 
 
-     # Decode modern section
 
-     # Contains palette of block names and indexes
 
 
-     def _init_index(self, nbt):
 
 
-    for p in nbt['Palette']:
 
-       name = p['Name'].value
 
-       if name.startswith('minecraft:'):
 
-         name = name[10:]
 
-       self.names.append(name)
 
 
-    states = nbt['BlockStates'].value
 
 
-    # Block states are packed into an array of longs
 
-    # with variable number of bits per block (min: 4)
 
 
-    nb = (len(self.names) - 1).bit_length()
 
-    if nb < 4: nb = 4
 
-    assert nb == len(states) * 8 * 8 / 4096
 
-    m = pow(2, nb) - 1
 
 
-    j = 0
 
-    bl = 64
 
-    ll = states[0]
 
 
-    for i in range(0,4096):
 
-       if bl == 0:
 
-         j = j + 1
 
-         ll = states[j]
 
-         bl = 64
 
 
-       if nb <= bl:
 
-         self.indexes.append(ll & m)
 
-         ll = ll >> nb
 
-         bl = bl - nb
 
-       else:
 
-         j = j + 1
 
-         lh = states[j]
 
-         bh = nb - bl
 
 
-         lh = (lh & (pow(2, bh) - 1)) << bl
 
-         ll = (ll & (pow(2, bl) - 1))
 
-         self.indexes.append(lh | ll)
 
 
-         ll = states[j]
 
-         ll = ll >> bh
 
-         bl = 64 - bh
 
 
 
-     def get_block(self, x, y, z):
 
-    # Blocks are stored in YZX order
 
-    i = y * 256 + z * 16 + x
 
-    p = self.indexes[i]
 
-    return self.names[p]
 
 
 
-     def iter_block(self):
 
-    for i in range(0, 4096):
 
-       p = self.indexes[i]
 
-       yield self.names[p]
 
 
 
- # Chunk in Anvil new format
 
 
- class AnvilChunk(Chunk):
 
 
-     def __init__(self, nbt):
 
-    Chunk.__init__(self, nbt)
 
 
-    # Started to work on this class with MC version 1.13.2
 
-    # so with the chunk data version 1631
 
-    # Backported to first Anvil version (= 0) from examples
 
-    # Could work with other versions, but has to be tested first
 
 
-    try:
 
-       version = nbt['DataVersion'].value
 
-       if version != 1343 and version != 1631:
 
-         raise NotImplementedError('DataVersion %d not implemented' % (version,))
 
-    except KeyError:
 
-       version = 0
 
 
-    # Load all sections
 
 
-    self.sections = {}
 
-    if 'Sections' in self.chunk_data:
 
-       for s in self.chunk_data['Sections']:
 
-         self.sections[s['Y'].value] = AnvilSection(s, version)
 
 
 
-     def get_section(self, y):
 
-    """Get a section from Y index."""
 
-    if y in self.sections:
 
-       return self.sections[y]
 
 
-    return None
 
 
 
-     def get_max_height(self):
 
-    ymax = 0
 
-    for y in self.sections.keys():
 
-       if y > ymax: ymax = y
 
-    return ymax * 16 + 15
 
 
 
-     def get_block(self, x, y, z):
 
-    """Get a block from relative x,y,z."""
 
-    sy,by = divmod(y, 16)
 
-    section = self.get_section(sy)
 
-    if section == None:
 
-       return None
 
 
-    return section.get_block(x, by, z)
 
 
 
-     def iter_block(self):
 
-    for s in self.sections.values():
 
-       for b in s.iter_block():
 
-         yield b
 
 
 
- class BlockArray(object):
 
-     """Convenience class for dealing with a Block/data byte array."""
 
-     def __init__(self, blocksBytes=None, dataBytes=None):
 
-    """Create a new BlockArray, defaulting to no block or data bytes."""
 
-    if isinstance(blocksBytes, (bytearray, array.array)):
 
-       self.blocksList = list(blocksBytes)
 
-    else:
 
-       self.blocksList = [0]*32768 # Create an empty block list (32768 entries of zero (air))
 
 
-    if isinstance(dataBytes, (bytearray, array.array)):
 
-       self.dataList = list(dataBytes)
 
-    else:
 
-       self.dataList = [0]*16384 # Create an empty data list (32768 4-bit entries of zero make 16384 byte entries)
 
 
-     def get_blocks_struct(self):
 
-    """Return a dictionary with block ids keyed to (x, y, z)."""
 
-    cur_x = 0
 
-    cur_y = 0
 
-    cur_z = 0
 
-    blocks = {}
 
-    for block_id in self.blocksList:
 
-       blocks[(cur_x,cur_y,cur_z)] = block_id
 
-       cur_y += 1
 
-       if (cur_y > 127):
 
-         cur_y = 0
 
-         cur_z += 1
 
-         if (cur_z > 15):
 
-        cur_z = 0
 
-        cur_x += 1
 
-    return blocks
 
 
-     # Give blockList back as a byte array
 
-     def get_blocks_byte_array(self, buffer=False):
 
-    """Return a list of all blocks in this chunk."""
 
-    if buffer:
 
-       length = len(self.blocksList)
 
-       return BytesIO(pack(">i", length)+self.get_blocks_byte_array())
 
-    else:
 
-       return array.array('B', self.blocksList).tostring()
 
 
-     def get_data_byte_array(self, buffer=False):
 
-    """Return a list of data for all blocks in this chunk."""
 
-    if buffer:
 
-       length = len(self.dataList)
 
-       return BytesIO(pack(">i", length)+self.get_data_byte_array())
 
-    else:
 
-       return array.array('B', self.dataList).tostring()
 
 
-     def generate_heightmap(self, buffer=False, as_array=False):
 
-    """Return a heightmap, representing the highest solid blocks in this chunk."""
 
-    non_solids = [0, 8, 9, 10, 11, 38, 37, 32, 31]
 
-    if buffer:
 
-       return BytesIO(pack(">i", 256)+self.generate_heightmap()) # Length + Heightmap, ready for insertion into Chunk NBT
 
-    else:
 
-       bytes = []
 
-       for z in range(16):
 
-         for x in range(16):
 
-        for y in range(127, -1, -1):
 
-           offset = y + z*128 + x*128*16
 
-           if (self.blocksList[offset] not in non_solids or y == 0):
 
-             bytes.append(y+1)
 
-             break
 
-       if (as_array):
 
-         return bytes
 
-       else:
 
-         return array.array('B', bytes).tostring()
 
 
-     def set_blocks(self, list=None, dict=None, fill_air=False):
 
-    """
 
-    Sets all blocks in this chunk, using either a list or dictionary.
 
-    Blocks not explicitly set can be filled to air by setting fill_air to True.
 
-    """
 
-    if list:
 
-       # Inputting a list like self.blocksList
 
-       self.blocksList = list
 
-    elif dict:
 
-       # Inputting a dictionary like result of self.get_blocks_struct()
 
-       list = []
 
-       for x in range(16):
 
-         for z in range(16):
 
-        for y in range(128):
 
-           coord = x,y,z
 
-           offset = y + z*128 + x*128*16
 
-           if (coord in dict):
 
-             list.append(dict[coord])
 
-           else:
 
-             if (self.blocksList[offset] and not fill_air):
 
-            list.append(self.blocksList[offset])
 
-             else:
 
-            list.append(0) # Air
 
-       self.blocksList = list
 
-    else:
 
-       # None of the above...
 
-       return False
 
-    return True
 
 
-     def set_block(self, x,y,z, id, data=0):
 
-    """Sets the block a x, y, z to the specified id, and optionally data."""
 
-    offset = y + z*128 + x*128*16
 
-    self.blocksList[offset] = id
 
-    if (offset % 2 == 1):
 
-       # offset is odd
 
-       index = (offset-1)//2
 
-       b = self.dataList[index]
 
-       self.dataList[index] = (b & 240) + (data & 15) # modify lower bits, leaving higher bits in place
 
-    else:
 
-       # offset is even
 
-       index = offset//2
 
-       b = self.dataList[index]
 
-       self.dataList[index] = (b & 15) + (data << 4 & 240) # modify higher bits, leaving lower bits in place
 
 
-     # Get a given X,Y,Z or a tuple of three coordinates
 
-     def get_block(self, x,y,z, coord=False):
 
-    """Return the id of the block at x, y, z."""
 
-    """
 
-    Laid out like:
 
-    (0,0,0), (0,1,0), (0,2,0) ... (0,127,0), (0,0,1), (0,1,1), (0,2,1) ... (0,127,1), (0,0,2) ... (0,127,15), (1,0,0), (1,1,0) ... (15,127,15)
 
-    
 
-    ::
 
-    
 
-       blocks = []
 
-       for x in range(15):
 
-       for z in range(15):
 
-      for y in range(127):
 
-         blocks.append(Block(x,y,z))
 
-    """
 
 
-    offset = y + z*128 + x*128*16 if (coord == False) else coord[1] + coord[2]*128 + coord[0]*128*16
 
-    return self.blocksList[offset]
 
nbt/nbt.py
Code:
 
- """
 
- Handle the NBT (Named Binary Tag) data format
 
- For more information about the NBT format:
 
- https://minecraft.gamepedia.com/NBT_format
 
- """
 
 
- from struct import Struct, error as StructError
 
- from gzip import GzipFile
 
- from collections import MutableMapping, MutableSequence, Sequence
 
- import sys
 
 
- _PY3 = sys.version_info >= (3,)
 
- if _PY3:
 
-     unicode = str
 
-     basestring = str
 
- else:
 
-     range = xrange
 
 
- TAG_END = 0
 
- TAG_BYTE = 1
 
- TAG_SHORT = 2
 
- TAG_INT = 3
 
- TAG_LONG = 4
 
- TAG_FLOAT = 5
 
- TAG_DOUBLE = 6
 
- TAG_BYTE_ARRAY = 7
 
- TAG_STRING = 8
 
- TAG_LIST = 9
 
- TAG_COMPOUND = 10
 
- TAG_INT_ARRAY = 11
 
- TAG_LONG_ARRAY = 12
 
 
 
- class MalformedFileError(Exception):
 
-     """Exception raised on parse error."""
 
-     pass
 
 
 
- class TAG(object):
 
-     """TAG, a variable with an intrinsic name."""
 
-     id = None
 
 
-     def __init__(self, value=None, name=None):
 
-    self.name = name
 
-    self.value = value
 
 
-     # Parsers and Generators
 
-     def _parse_buffer(self, buffer):
 
-    raise NotImplementedError(self.__class__.__name__)
 
 
-     def _render_buffer(self, buffer):
 
-    raise NotImplementedError(self.__class__.__name__)
 
 
-     # Printing and Formatting of tree
 
-     def tag_info(self):
 
-    """Return Unicode string with class, name and unnested value."""
 
-    return self.__class__.__name__ + (
 
-       '(%r)' % self.name if self.name
 
-       else "") + ": " + self.valuestr()
 
 
-     def valuestr(self):
 
-    """Return Unicode string of unnested value. For iterators, this
 
-    returns a summary."""
 
-    return unicode(self.value)
 
 
-     def pretty_tree(self, indent=0):
 
-    """Return formated Unicode string of self, where iterable items are
 
-    recursively listed in detail."""
 
-    return ("\t" * indent) + self.tag_info()
 
 
-     # Python 2 compatibility; Python 3 uses __str__ instead.
 
-     def __unicode__(self):
 
-    """Return a unicode string with the result in human readable format.
 
-    Unlike valuestr(), the result is recursive for iterators till at least
 
-    one level deep."""
 
-    return unicode(self.value)
 
 
-     def __str__(self):
 
-    """Return a string (ascii formated for Python 2, unicode for Python 3)
 
-    with the result in human readable format. Unlike valuestr(), the result
 
-      is recursive for iterators till at least one level deep."""
 
-    return str(self.value)
 
 
-     # Unlike regular iterators, __repr__() is not recursive.
 
-     # Use pretty_tree for recursive results.
 
-     # iterators should use __repr__ or tag_info for each item, like
 
-     #regular iterators
 
-     def __repr__(self):
 
-    """Return a string (ascii formated for Python 2, unicode for Python 3)
 
-    describing the class, name and id for debugging purposes."""
 
-    return "<%s(%r) at 0x%x>" % (
 
-       self.__class__.__name__, self.name, id(self))
 
 
 
- class _TAG_Numeric(TAG):
 
-     """_TAG_Numeric, comparable to int with an intrinsic name"""
 
 
-     def __init__(self, value=None, name=None, buffer=None):
 
-    super(_TAG_Numeric, self).__init__(value, name)
 
-    if buffer:
 
-       self._parse_buffer(buffer)
 
 
-     # Parsers and Generators
 
-     def _parse_buffer(self, buffer):
 
-    # Note: buffer.read() may raise an IOError, for example if buffer is a
 
-    # corrupt gzip.GzipFile
 
-    self.value = self.fmt.unpack(buffer.read(self.fmt.size))[0]
 
 
-     def _render_buffer(self, buffer):
 
-    buffer.write(self.fmt.pack(self.value))
 
 
 
- class _TAG_End(TAG):
 
-     id = TAG_END
 
-     fmt = Struct(">b")
 
 
-     def _parse_buffer(self, buffer):
 
-    # Note: buffer.read() may raise an IOError, for example if buffer is a
 
-    # corrupt gzip.GzipFile
 
-    value = self.fmt.unpack(buffer.read(1))[0]
 
-    if value != 0:
 
-       raise ValueError(
 
-         "A Tag End must be rendered as '0', not as '%d'." % value)
 
 
-     def _render_buffer(self, buffer):
 
-    buffer.write(b'\x00')
 
 
 
- # == Value Tags ==#
 
- class TAG_Byte(_TAG_Numeric):
 
-     """Represent a single tag storing 1 byte."""
 
-     id = TAG_BYTE
 
-     fmt = Struct(">b")
 
 
 
- class TAG_Short(_TAG_Numeric):
 
-     """Represent a single tag storing 1 short."""
 
-     id = TAG_SHORT
 
-     fmt = Struct(">h")
 
 
 
- class TAG_Int(_TAG_Numeric):
 
-     """Represent a single tag storing 1 int."""
 
-     id = TAG_INT
 
-     fmt = Struct(">i")
 
-     """Struct(">i"), 32-bits integer, big-endian"""
 
 
 
- class TAG_Long(_TAG_Numeric):
 
-     """Represent a single tag storing 1 long."""
 
-     id = TAG_LONG
 
-     fmt = Struct(">q")
 
 
 
- class TAG_Float(_TAG_Numeric):
 
-     """Represent a single tag storing 1 IEEE-754 floating point number."""
 
-     id = TAG_FLOAT
 
-     fmt = Struct(">f")
 
 
 
- class TAG_Double(_TAG_Numeric):
 
-     """Represent a single tag storing 1 IEEE-754 double precision floating
 
-     point number."""
 
-     id = TAG_DOUBLE
 
-     fmt = Struct(">d")
 
 
 
- class TAG_Byte_Array(TAG, MutableSequence):
 
-     """
 
-     TAG_Byte_Array, comparable to a collections.UserList with
 
-     an intrinsic name whose values must be bytes
 
-     """
 
-     id = TAG_BYTE_ARRAY
 
 
-     def __init__(self, name=None, buffer=None):
 
-    # TODO: add a value parameter as well
 
-    super(TAG_Byte_Array, self).__init__(name=name)
 
-    if buffer:
 
-       self._parse_buffer(buffer)
 
 
-     # Parsers and Generators
 
-     def _parse_buffer(self, buffer):
 
-    length = TAG_Int(buffer=buffer)
 
-    self.value = bytearray(buffer.read(length.value))
 
 
-     def _render_buffer(self, buffer):
 
-    length = TAG_Int(len(self.value))
 
-    length._render_buffer(buffer)
 
-    buffer.write(bytes(self.value))
 
 
-     # Mixin methods
 
-     def __len__(self):
 
-    return len(self.value)
 
 
-     def __iter__(self):
 
-    return iter(self.value)
 
 
-     def __contains__(self, item):
 
-    return item in self.value
 
 
-     def __getitem__(self, key):
 
-    return self.value[key]
 
 
-     def __setitem__(self, key, value):
 
-    # TODO: check type of value
 
-    self.value[key] = value
 
 
-     def __delitem__(self, key):
 
-    del (self.value[key])
 
 
-     def insert(self, key, value):
 
-    # TODO: check type of value, or is this done by self.value already?
 
-    self.value.insert(key, value)
 
 
-     # Printing and Formatting of tree
 
-     def valuestr(self):
 
-    return "[%i byte(s)]" % len(self.value)
 
 
-     def __unicode__(self):
 
-    return '[' + ",".join([unicode(x) for x in self.value]) + ']'
 
 
-     def __str__(self):
 
-    return '[' + ",".join([str(x) for x in self.value]) + ']'
 
 
 
- class TAG_Int_Array(TAG, MutableSequence):
 
-     """
 
-     TAG_Int_Array, comparable to a collections.UserList with
 
-     an intrinsic name whose values must be integers
 
-     """
 
-     id = TAG_INT_ARRAY
 
 
-     def __init__(self, name=None, buffer=None):
 
-    # TODO: add a value parameter as well
 
-    super(TAG_Int_Array, self).__init__(name=name)
 
-    if buffer:
 
-       self._parse_buffer(buffer)
 
 
-     def update_fmt(self, length):
 
-    """ Adjust struct format description to length given """
 
-    self.fmt = Struct(">" + str(length) + "i")
 
 
-     # Parsers and Generators
 
-     def _parse_buffer(self, buffer):
 
-    length = TAG_Int(buffer=buffer).value
 
-    self.update_fmt(length)
 
-    self.value = list(self.fmt.unpack(buffer.read(self.fmt.size)))
 
 
-     def _render_buffer(self, buffer):
 
-    length = len(self.value)
 
-    self.update_fmt(length)
 
-    TAG_Int(length)._render_buffer(buffer)
 
-    buffer.write(self.fmt.pack(*self.value))
 
 
-     # Mixin methods
 
-     def __len__(self):
 
-    return len(self.value)
 
 
-     def __iter__(self):
 
-    return iter(self.value)
 
 
-     def __contains__(self, item):
 
-    return item in self.value
 
 
-     def __getitem__(self, key):
 
-    return self.value[key]
 
 
-     def __setitem__(self, key, value):
 
-    self.value[key] = value
 
 
-     def __delitem__(self, key):
 
-    del (self.value[key])
 
 
-     def insert(self, key, value):
 
-    self.value.insert(key, value)
 
 
-     # Printing and Formatting of tree
 
-     def valuestr(self):
 
-    return "[%i int(s)]" % len(self.value)
 
 
 
- class TAG_Long_Array(TAG, MutableSequence):
 
-     """
 
-     TAG_Long_Array, comparable to a collections.UserList with
 
-     an intrinsic name whose values must be integers
 
-     """
 
-     id = TAG_LONG_ARRAY
 
 
-     def __init__(self, name=None, buffer=None):
 
-    super(TAG_Long_Array, self).__init__(name=name)
 
-    if buffer:
 
-       self._parse_buffer(buffer)
 
 
-     def update_fmt(self, length):
 
-    """ Adjust struct format description to length given """
 
-    self.fmt = Struct(">" + str(length) + "q")
 
 
-     # Parsers and Generators
 
-     def _parse_buffer(self, buffer):
 
-    length = TAG_Int(buffer=buffer).value
 
-    self.update_fmt(length)
 
-    self.value = list(self.fmt.unpack(buffer.read(self.fmt.size)))
 
 
-     def _render_buffer(self, buffer):
 
-    length = len(self.value)
 
-    self.update_fmt(length)
 
-    TAG_Int(length)._render_buffer(buffer)
 
-    buffer.write(self.fmt.pack(*self.value))
 
 
-     # Mixin methods
 
-     def __len__(self):
 
-    return len(self.value)
 
 
-     def __iter__(self):
 
-    return iter(self.value)
 
 
-     def __contains__(self, item):
 
-    return item in self.value
 
 
-     def __getitem__(self, key):
 
-    return self.value[key]
 
 
-     def __setitem__(self, key, value):
 
-    self.value[key] = value
 
 
-     def __delitem__(self, key):
 
-    del (self.value[key])
 
 
-     def insert(self, key, value):
 
-    self.value.insert(key, value)
 
 
-     # Printing and Formatting of tree
 
-     def valuestr(self):
 
-    return "[%i long(s)]" % len(self.value)
 
 
 
- class TAG_String(TAG, Sequence):
 
-     """
 
-     TAG_String, comparable to a collections.UserString with an
 
-     intrinsic name
 
-     """
 
-     id = TAG_STRING
 
 
-     def __init__(self, value=None, name=None, buffer=None):
 
-    super(TAG_String, self).__init__(value, name)
 
-    if buffer:
 
-       self._parse_buffer(buffer)
 
 
-     # Parsers and Generators
 
-     def _parse_buffer(self, buffer):
 
-    length = TAG_Short(buffer=buffer)
 
-    read = buffer.read(length.value)
 
-    if len(read) != length.value:
 
-       raise StructError()
 
-    self.value = read.decode("utf-8")
 
 
-     def _render_buffer(self, buffer):
 
-    save_val = self.value.encode("utf-8")
 
-    length = TAG_Short(len(save_val))
 
-    length._render_buffer(buffer)
 
-    buffer.write(save_val)
 
 
-     # Mixin methods
 
-     def __len__(self):
 
-    return len(self.value)
 
 
-     def __iter__(self):
 
-    return iter(self.value)
 
 
-     def __contains__(self, item):
 
-    return item in self.value
 
 
-     def __getitem__(self, key):
 
-    return self.value[key]
 
 
-     # Printing and Formatting of tree
 
-     def __repr__(self):
 
-    return self.value
 
 
 
- # == Collection Tags ==#
 
- class TAG_List(TAG, MutableSequence):
 
-     """
 
-     TAG_List, comparable to a collections.UserList with an intrinsic name
 
-     """
 
-     id = TAG_LIST
 
 
-     def __init__(self, type=None, value=None, name=None, buffer=None):
 
-    super(TAG_List, self).__init__(value, name)
 
-    if type:
 
-       self.tagID = type.id
 
-    else:
 
-       self.tagID = None
 
-    self.tags = []
 
-    if buffer:
 
-       self._parse_buffer(buffer)
 
-    # if self.tagID == None:
 
-    #  raise ValueError("No type specified for list: %s" % (name))
 
 
-     # Parsers and Generators
 
-     def _parse_buffer(self, buffer):
 
-    self.tagID = TAG_Byte(buffer=buffer).value
 
-    self.tags = []
 
-    length = TAG_Int(buffer=buffer)
 
-    for x in range(length.value):
 
-       self.tags.append(TAGLIST[self.tagID](buffer=buffer))
 
 
-     def _render_buffer(self, buffer):
 
-    TAG_Byte(self.tagID)._render_buffer(buffer)
 
-    length = TAG_Int(len(self.tags))
 
-    length._render_buffer(buffer)
 
-    for i, tag in enumerate(self.tags):
 
-       if tag.id != self.tagID:
 
-         raise ValueError(
 
-        "List element %d(%s) has type %d != container type %d" %
 
-        (i, tag, tag.id, self.tagID))
 
-       tag._render_buffer(buffer)
 
 
-     # Mixin methods
 
-     def __len__(self):
 
-    return len(self.tags)
 
 
-     def __iter__(self):
 
-    return iter(self.tags)
 
 
-     def __contains__(self, item):
 
-    return item in self.tags
 
 
-     def __getitem__(self, key):
 
-    return self.tags[key]
 
 
-     def __setitem__(self, key, value):
 
-    self.tags[key] = value
 
 
-     def __delitem__(self, key):
 
-    del (self.tags[key])
 
 
-     def insert(self, key, value):
 
-    self.tags.insert(key, value)
 
 
-     # Printing and Formatting of tree
 
-     def __repr__(self):
 
-    return "%i entries of type %s" % (
 
-       len(self.tags), TAGLIST[self.tagID].__name__)
 
 
-     # Printing and Formatting of tree
 
-     def valuestr(self):
 
-    return "[%i %s(s)]" % (len(self.tags), TAGLIST[self.tagID].__name__)
 
 
-     def __unicode__(self):
 
-    return "[" + ", ".join([tag.tag_info() for tag in self.tags]) + "]"
 
 
-     def __str__(self):
 
-    return "[" + ", ".join([tag.tag_info() for tag in self.tags]) + "]"
 
 
-     def pretty_tree(self, indent=0):
 
-    output = [super(TAG_List, self).pretty_tree(indent)]
 
-    if len(self.tags):
 
-       output.append(("\t" * indent) + "{")
 
-       output.extend([tag.pretty_tree(indent + 1) for tag in self.tags])
 
-       output.append(("\t" * indent) + "}")
 
-    return '\n'.join(output)
 
 
 
- class TAG_Compound(TAG, MutableMapping):
 
-     """
 
-     TAG_Compound, comparable to a collections.OrderedDict with an
 
-     intrinsic name
 
-     """
 
-     id = TAG_COMPOUND
 
 
-     def __init__(self, buffer=None, name=None):
 
-    # TODO: add a value parameter as well
 
-    super(TAG_Compound, self).__init__()
 
-    self.tags = []
 
-    self.name = ""
 
-    if buffer:
 
-       self._parse_buffer(buffer)
 
 
-     # Parsers and Generators
 
-     def _parse_buffer(self, buffer):
 
-    while True:
 
-       type = TAG_Byte(buffer=buffer)
 
-       if type.value == TAG_END:
 
-         # print("found tag_end")
 
-         break
 
-       else:
 
-         name = TAG_String(buffer=buffer).value
 
-         try:
 
-        tag = TAGLIST[type.value]()
 
-         except KeyError:
 
-        raise ValueError("Unrecognised tag type %d" % type.value)
 
-         tag.name = name
 
-         self.tags.append(tag)
 
-         tag._parse_buffer(buffer)
 
 
-     def _render_buffer(self, buffer):
 
-    for tag in self.tags:
 
-       TAG_Byte(tag.id)._render_buffer(buffer)
 
-       TAG_String(tag.name)._render_buffer(buffer)
 
-       tag._render_buffer(buffer)
 
-    buffer.write(b'\x00')# write TAG_END
 
 
-     # Mixin methods
 
-     def __len__(self):
 
-    return len(self.tags)
 
 
-     def __iter__(self):
 
-    for key in self.tags:
 
-       yield key.name
 
 
-     def __contains__(self, key):
 
-    if isinstance(key, int):
 
-       return key <= len(self.tags)
 
-    elif isinstance(key, basestring):
 
-       for tag in self.tags:
 
-         if tag.name == key:
 
-        return True
 
-       return False
 
-    elif isinstance(key, TAG):
 
-       return key in self.tags
 
-    return False
 
 
-     def __getitem__(self, key):
 
-    if isinstance(key, int):
 
-       return self.tags[key]
 
-    elif isinstance(key, basestring):
 
-       for tag in self.tags:
 
-         if tag.name == key:
 
-        return tag
 
-       else:
 
-         raise KeyError("Tag %s does not exist" % key)
 
-    else:
 
-       raise TypeError(
 
-         "key needs to be either name of tag, or index of tag, "
 
-         "not a %s" % type(key).__name__)
 
 
-     def __setitem__(self, key, value):
 
-    assert isinstance(value, TAG), "value must be an nbt.TAG"
 
-    if isinstance(key, int):
 
-       # Just try it. The proper error will be raised if it doesn't work.
 
-       self.tags[key] = value
 
-    elif isinstance(key, basestring):
 
-       value.name = key
 
-       for i, tag in enumerate(self.tags):
 
-         if tag.name == key:
 
-        self.tags[i] = value
 
-        return
 
-       self.tags.append(value)
 
 
-     def __delitem__(self, key):
 
-    if isinstance(key, int):
 
-       del (self.tags[key])
 
-    elif isinstance(key, basestring):
 
-       self.tags.remove(self.__getitem__(key))
 
-    else:
 
-       raise ValueError(
 
-         "key needs to be either name of tag, or index of tag")
 
 
-     def keys(self):
 
-    return [tag.name for tag in self.tags]
 
 
-     def iteritems(self):
 
-    for tag in self.tags:
 
-       yield (tag.name, tag)
 
 
-     # Printing and Formatting of tree
 
-     def __unicode__(self):
 
-    return "{" + ", ".join([tag.tag_info() for tag in self.tags]) + "}"
 
 
-     def __str__(self):
 
-    return "{" + ", ".join([tag.tag_info() for tag in self.tags]) + "}"
 
 
-     def valuestr(self):
 
-    return '{%i Entries}' % len(self.tags)
 
 
-     def pretty_tree(self, indent=0):
 
-    output = [super(TAG_Compound, self).pretty_tree(indent)]
 
-    if len(self.tags):
 
-       output.append(("\t" * indent) + "{")
 
-       output.extend([tag.pretty_tree(indent + 1) for tag in self.tags])
 
-       output.append(("\t" * indent) + "}")
 
-    return '\n'.join(output)
 
 
 
- TAGLIST = {TAG_END: _TAG_End, TAG_BYTE: TAG_Byte, TAG_SHORT: TAG_Short,
 
-     TAG_INT: TAG_Int, TAG_LONG: TAG_Long, TAG_FLOAT: TAG_Float,
 
-     TAG_DOUBLE: TAG_Double, TAG_BYTE_ARRAY: TAG_Byte_Array,
 
-     TAG_STRING: TAG_String, TAG_LIST: TAG_List,
 
-     TAG_COMPOUND: TAG_Compound, TAG_INT_ARRAY: TAG_Int_Array,
 
-     TAG_LONG_ARRAY: TAG_Long_Array}
 
 
 
- class NBTFile(TAG_Compound):
 
-     """Represent an NBT file object."""
 
 
-     def __init__(self, filename=None, buffer=None, fileobj=None):
 
-    """
 
-    Create a new NBTFile object.
 
-    Specify either a filename, file object or data buffer.
 
-    If filename of file object is specified, data should be GZip-compressed.
 
-    If a data buffer is specified, it is assumed to be uncompressed.
 
-    If filename is specified, the file is closed after reading and writing.
 
-    If file object is specified, the caller is responsible for closing the
 
-    file.
 
-    """
 
-    super(NBTFile, self).__init__()
 
-    self.filename = filename
 
-    self.type = TAG_Byte(self.id)
 
-    closefile = True
 
-    # make a file object
 
-    if filename:
 
-       self.filename = filename
 
-       self.file = GzipFile(filename, 'rb')
 
-    elif buffer:
 
-       if hasattr(buffer, 'name'):
 
-         self.filename = buffer.name
 
-       self.file = buffer
 
-       closefile = False
 
-    elif fileobj:
 
-       if hasattr(fileobj, 'name'):
 
-         self.filename = fileobj.name
 
-       self.file = GzipFile(fileobj=fileobj)
 
-    else:
 
-       self.file = None
 
-       closefile = False
 
-    # parse the file given initially
 
-    if self.file:
 
-       self.parse_file()
 
-       if closefile:
 
-         # Note: GzipFile().close() does NOT close the fileobj,
 
-         # So we are still responsible for closing that.
 
-         try:
 
-        self.file.close()
 
-         except (AttributeError, IOError):
 
-        pass
 
-       self.file = None
 
 
-     def parse_file(self, filename=None, buffer=None, fileobj=None):
 
-    """Completely parse a file, extracting all tags."""
 
-    if filename:
 
-       self.file = GzipFile(filename, 'rb')
 
-    elif buffer:
 
-       if hasattr(buffer, 'name'):
 
-         self.filename = buffer.name
 
-       self.file = buffer
 
-    elif fileobj:
 
-       if hasattr(fileobj, 'name'):
 
-         self.filename = fileobj.name
 
-       self.file = GzipFile(fileobj=fileobj)
 
-    if self.file:
 
-       try:
 
-         type = TAG_Byte(buffer=self.file)
 
-         if type.value == self.id:
 
-        name = TAG_String(buffer=self.file).value
 
-        self._parse_buffer(self.file)
 
-        self.name = name
 
-        self.file.close()
 
-         else:
 
-        raise MalformedFileError(
 
-           "First record is not a Compound Tag")
 
-       except StructError as e:
 
-         raise MalformedFileError(
 
-        "Partial File Parse: file possibly truncated.")
 
-    else:
 
-       raise ValueError(
 
-         "NBTFile.parse_file(): Need to specify either a "
 
-         "filename or a file object"
 
-       )
 
 
-     def write_file(self, filename=None, buffer=None, fileobj=None):
 
-    """Write this NBT file to a file."""
 
-    closefile = True
 
-    if buffer:
 
-       self.filename = None
 
-       self.file = buffer
 
-       closefile = False
 
-    elif filename:
 
-       self.filename = filename
 
-       self.file = GzipFile(filename, "wb")
 
-    elif fileobj:
 
-       self.filename = None
 
-       self.file = GzipFile(fileobj=fileobj, mode="wb")
 
-    elif self.filename:
 
-       self.file = GzipFile(self.filename, "wb")
 
-    elif not self.file:
 
-       raise ValueError(
 
-         "NBTFile.write_file(): Need to specify either a "
 
-         "filename or a file object"
 
-       )
 
-    # Render tree to file
 
-    TAG_Byte(self.id)._render_buffer(self.file)
 
-    TAG_String(self.name)._render_buffer(self.file)
 
-    self._render_buffer(self.file)
 
-    # make sure the file is complete
 
-    try:
 
-       self.file.flush()
 
-    except (AttributeError, IOError):
 
-       pass
 
-    if closefile:
 
-       try:
 
-         self.file.close()
 
-       except (AttributeError, IOError):
 
-         pass
 
 
-     def __repr__(self):
 
-    """
 
-    Return a string (ascii formated for Python 2, unicode
 
-    for Python 3) describing the class, name and id for
 
-    debugging purposes.
 
-    """
 
-    if self.filename:
 
-       return "<%s(%r) with %s(%r) at 0x%x>" % (
 
-         self.__class__.__name__, self.filename,
 
-         TAG_Compound.__name__, self.name, id(self)
 
-       )
 
-    else:
 
-       return "<%s with %s(%r) at 0x%x>" % (
 
-         self.__class__.__name__, TAG_Compound.__name__,
 
-         self.name, id(self)
 
-       )
 
nbt/region.py
Code:
 
 
- """
 
- Handle a region file, containing 32x32 chunks.
 
- For more information about the region file format:
 
- https://minecraft.gamepedia.com/Region_file_format
 
- """
 
 
- from .nbt import NBTFile, MalformedFileError
 
- from struct import pack, unpack
 
- from collections import Mapping
 
- import zlib
 
- import gzip
 
- from io import BytesIO
 
- import time
 
- from os import SEEK_END
 
 
- # constants
 
 
- SECTOR_LENGTH = 4096
 
- """Constant indicating the length of a sector. A Region file is divided in sectors of 4096 bytes each."""
 
 
- # TODO: move status codes to an (Enum) object
 
 
- # Status is a number representing:
 
- # -5 = Error, the chunk is overlapping with another chunk
 
- # -4 = Error, the chunk length is too large to fit in the sector length in the region header
 
- # -3 = Error, chunk header has a 0 length
 
- # -2 = Error, chunk inside the header of the region file
 
- # -1 = Error, chunk partially/completely outside of file
 
- #0 = Ok
 
- #1 = Chunk non-existant yet
 
- STATUS_CHUNK_OVERLAPPING = -5
 
- """Constant indicating an error status: the chunk is allocated to a sector already occupied by another chunk"""
 
- STATUS_CHUNK_MISMATCHED_LENGTHS = -4
 
- """Constant indicating an error status: the region header length and the chunk length are incompatible"""
 
- STATUS_CHUNK_ZERO_LENGTH = -3
 
- """Constant indicating an error status: chunk header has a 0 length"""
 
- STATUS_CHUNK_IN_HEADER = -2
 
- """Constant indicating an error status: chunk inside the header of the region file"""
 
- STATUS_CHUNK_OUT_OF_FILE = -1
 
- """Constant indicating an error status: chunk partially/completely outside of file"""
 
- STATUS_CHUNK_OK = 0
 
- """Constant indicating an normal status: the chunk exists and the metadata is valid"""
 
- STATUS_CHUNK_NOT_CREATED = 1
 
- """Constant indicating an normal status: the chunk does not exist"""
 
 
- COMPRESSION_NONE = 0
 
- """Constant indicating that the chunk is not compressed."""
 
- COMPRESSION_GZIP = 1
 
- """Constant indicating that the chunk is GZip compressed."""
 
- COMPRESSION_ZLIB = 2
 
- """Constant indicating that the chunk is zlib compressed."""
 
 
 
- # TODO: reconsider these errors. where are they catched? Where would an implementation make a difference in handling the different exceptions.
 
 
- class RegionFileFormatError(Exception):
 
-     """Base class for all file format errors.
 
-     Note: InconceivedChunk is not a child class, because it is not considered a format error."""
 
-     def __init__(self, msg=""):
 
-    self.msg = msg
 
-     def __str__(self):
 
-    return self.msg
 
 
- class NoRegionHeader(RegionFileFormatError):
 
-     """The size of the region file is too small to contain a header."""
 
 
- class RegionHeaderError(RegionFileFormatError):
 
-     """Error in the header of the region file for a given chunk."""
 
 
- class ChunkHeaderError(RegionFileFormatError):
 
-     """Error in the header of a chunk, included the bytes of length and byte version."""
 
 
- class ChunkDataError(RegionFileFormatError):
 
-     """Error in the data of a chunk."""
 
 
- class InconceivedChunk(LookupError):
 
-     """Specified chunk has not yet been generated."""
 
-     def __init__(self, msg=""):
 
-    self.msg = msg
 
 
 
- class ChunkMetadata(object):
 
-     """
 
-     Metadata for a particular chunk found in the 8 kiByte header and 5-byte chunk header.
 
-     """
 
 
-     def __init__(self, x, z):
 
-    self.x = x
 
-    """x-coordinate of the chunk in the file"""
 
-    self.z = z
 
-    """z-coordinate of the chunk in the file"""
 
-    self.blockstart = 0
 
-    """start of the chunk block, counted in 4 kiByte sectors from the
 
-    start of the file. (24 bit int)"""
 
-    self.blocklength = 0
 
-    """amount of 4 kiBytes sectors in the block (8 bit int)"""
 
-    self.timestamp = 0
 
-    """a Unix timestamps (seconds since epoch) (32 bits), found in the
 
-    second sector in the file."""
 
-    self.length = 0
 
-    """length of the block in bytes. This excludes the 4-byte length header,
 
-    and includes the 1-byte compression byte. (32 bit int)"""
 
-    self.compression = None
 
-    """type of compression used for the chunk block. (8 bit int).
 
-     
 
-    - 0: uncompressed
 
-    - 1: gzip compression
 
-    - 2: zlib compression"""
 
-    self.status = STATUS_CHUNK_NOT_CREATED
 
-    """status as determined from blockstart, blocklength, length, file size
 
-    and location of other chunks in the file.
 
-    
 
-    - STATUS_CHUNK_OVERLAPPING
 
-    - STATUS_CHUNK_MISMATCHED_LENGTHS
 
-    - STATUS_CHUNK_ZERO_LENGTH
 
-    - STATUS_CHUNK_IN_HEADER
 
-    - STATUS_CHUNK_OUT_OF_FILE
 
-    - STATUS_CHUNK_OK
 
-    - STATUS_CHUNK_NOT_CREATED"""
 
-     def __str__(self):
 
-    return "%s(%d, %d, sector=%s, blocklength=%s, timestamp=%s, bytelength=%s, compression=%s, status=%s)" % \
 
-       (self.__class__.__name__, self.x, self.z, self.blockstart, self.blocklength, self.timestamp, \
 
-       self.length, self.compression, self.status)
 
-     def __repr__(self):
 
-    return "%s(%d,%d)" % (self.__class__.__name__, self.x, self.z)
 
-     def requiredblocks(self):
 
-    # slightly faster variant of: floor(self.length + 4) / 4096))
 
-    return (self.length + 3 + SECTOR_LENGTH) // SECTOR_LENGTH
 
-     def is_created(self):
 
-    """return True if this chunk is created according to the header.
 
-    This includes chunks which are not readable for other reasons."""
 
-    return self.blockstart != 0
 
 
- class _HeaderWrapper(Mapping):
 
-     """Wrapper around self.metadata to emulate the old self.header variable"""
 
-     def __init__(self, metadata):
 
-    self.metadata = metadata
 
-     def __getitem__(self, xz):
 
-    m = self.metadata[xz]
 
-    return (m.blockstart, m.blocklength, m.timestamp, m.status)
 
-     def __iter__(self):
 
-    return iter(self.metadata) # iterates over the keys
 
-     def __len__(self):
 
-    return len(self.metadata)
 
- class _ChunkHeaderWrapper(Mapping):
 
-     """Wrapper around self.metadata to emulate the old self.chunk_headers variable"""
 
-     def __init__(self, metadata):
 
-    self.metadata = metadata
 
-     def __getitem__(self, xz):
 
-    m = self.metadata[xz]
 
-    return (m.length if m.length > 0 else None, m.compression, m.status)
 
-     def __iter__(self):
 
-    return iter(self.metadata) # iterates over the keys
 
-     def __len__(self):
 
-    return len(self.metadata)
 
 
- class Location(object):
 
-     def __init__(self, x=None, y=None, z=None):
 
-    self.x = x
 
-    self.y = y
 
-    self.z = z
 
-     def __str__(self):
 
-    return "%s(x=%s, y=%s, z=%s)" % (self.__class__.__name__, self.x, self.y, self.z)
 
 
- class RegionFile(object):
 
-     """A convenience class for extracting NBT files from the Minecraft Beta Region Format."""
 
-     
 
-     # Redefine constants for backward compatibility.
 
-     STATUS_CHUNK_OVERLAPPING = STATUS_CHUNK_OVERLAPPING
 
-     """Constant indicating an error status: the chunk is allocated to a sector
 
-     already occupied by another chunk. 
 
-     Deprecated. Use :const:`nbt.region.STATUS_CHUNK_OVERLAPPING` instead."""
 
-     STATUS_CHUNK_MISMATCHED_LENGTHS = STATUS_CHUNK_MISMATCHED_LENGTHS
 
-     """Constant indicating an error status: the region header length and the chunk
 
-     length are incompatible. Deprecated. Use :const:`nbt.region.STATUS_CHUNK_MISMATCHED_LENGTHS` instead."""
 
-     STATUS_CHUNK_ZERO_LENGTH = STATUS_CHUNK_ZERO_LENGTH
 
-     """Constant indicating an error status: chunk header has a 0 length.
 
-     Deprecated. Use :const:`nbt.region.STATUS_CHUNK_ZERO_LENGTH` instead."""
 
-     STATUS_CHUNK_IN_HEADER = STATUS_CHUNK_IN_HEADER
 
-     """Constant indicating an error status: chunk inside the header of the region file.
 
-     Deprecated. Use :const:`nbt.region.STATUS_CHUNK_IN_HEADER` instead."""
 
-     STATUS_CHUNK_OUT_OF_FILE = STATUS_CHUNK_OUT_OF_FILE
 
-     """Constant indicating an error status: chunk partially/completely outside of file.
 
-     Deprecated. Use :const:`nbt.region.STATUS_CHUNK_OUT_OF_FILE` instead."""
 
-     STATUS_CHUNK_OK = STATUS_CHUNK_OK
 
-     """Constant indicating an normal status: the chunk exists and the metadata is valid.
 
-     Deprecated. Use :const:`nbt.region.STATUS_CHUNK_OK` instead."""
 
-     STATUS_CHUNK_NOT_CREATED = STATUS_CHUNK_NOT_CREATED
 
-     """Constant indicating an normal status: the chunk does not exist.
 
-     Deprecated. Use :const:`nbt.region.STATUS_CHUNK_NOT_CREATED` instead."""
 
-     
 
-     def __init__(self, filename=None, fileobj=None, chunkclass = None):
 
-    """
 
-    Read a region file by filename or file object. 
 
-    If a fileobj is specified, it is not closed after use; it is the callers responibility to close it.
 
-    """
 
-    self.file = None
 
-    self.filename = None
 
-    self._closefile = False
 
-    self.chunkclass = chunkclass
 
-    if filename:
 
-       self.filename = filename
 
-       self.file = open(filename, 'r+b') # open for read and write in binary mode
 
-       self._closefile = True
 
-    elif fileobj:
 
-       if hasattr(fileobj, 'name'):
 
-         self.filename = fileobj.name
 
-       self.file = fileobj
 
-    elif not self.file:
 
-       raise ValueError("RegionFile(): Need to specify either a filename or a file object")
 
 
-    # Some variables
 
-    self.metadata = {}
 
-    """
 
-    dict containing ChunkMetadata objects, gathered from metadata found in the
 
-    8 kiByte header and 5-byte chunk header.
 
-    
 
-    ``metadata[x, z]: ChunkMetadata()``
 
-    """
 
-    self.header = _HeaderWrapper(self.metadata)
 
-    """
 
-    dict containing the metadata found in the 8 kiByte header:
 
-    
 
-    ``header[x, z]: (offset, sectionlength, timestamp, status)``
 
-    
 
-    :offset: counts in 4 kiByte sectors, starting from the start of the file. (24 bit int)
 
-    :blocklength: is in 4 kiByte sectors (8 bit int)
 
-    :timestamp: is a Unix timestamps (seconds since epoch) (32 bits)
 
-    :status: can be any of:
 
-    
 
-       - STATUS_CHUNK_OVERLAPPING
 
-       - STATUS_CHUNK_MISMATCHED_LENGTHS
 
-       - STATUS_CHUNK_ZERO_LENGTH
 
-       - STATUS_CHUNK_IN_HEADER
 
-       - STATUS_CHUNK_OUT_OF_FILE
 
-       - STATUS_CHUNK_OK
 
-       - STATUS_CHUNK_NOT_CREATED
 
-    
 
-    Deprecated. Use :attr:`metadata` instead.
 
-    """
 
-    self.chunk_headers = _ChunkHeaderWrapper(self.metadata)
 
-    """
 
-    dict containing the metadata found in each chunk block:
 
-    
 
-    ``chunk_headers[x, z]: (length, compression, chunk_status)``
 
-    
 
-    :chunk length: in bytes, starting from the compression byte (32 bit int)
 
-    :compression: is 1 (Gzip) or 2 (bzip) (8 bit int)
 
-    :chunk_status: is equal to status in :attr:`header`.
 
-    
 
-    If the chunk is not defined, the tuple is (None, None, STATUS_CHUNK_NOT_CREATED)
 
-    
 
-    Deprecated. Use :attr:`metadata` instead.
 
-    """
 
 
-    self.loc = Location()
 
-    """Optional: x,z location of a region within a world."""
 
-    
 
-    self._init_header()
 
-    self._parse_header()
 
-    self._parse_chunk_headers()
 
 
-     def get_size(self):
 
-    """ Returns the file size in bytes. """
 
-    # seek(0,2) jumps to 0-bytes from the end of the file.
 
-    # Python 2.6 support: seek does not yet return the position.
 
-    self.file.seek(0, SEEK_END)
 
-    return self.file.tell()
 
 
-     @staticmethod
 
-     def _bytes_to_sector(bsize, sectorlength=SECTOR_LENGTH):
 
-    """Given a size in bytes, return how many sections of length sectorlen are required to contain it.
 
-    This is equivalent to ceil(bsize/sectorlen), if Python would use floating
 
-    points for division, and integers for ceil(), rather than the other way around."""
 
-    sectors, remainder = divmod(bsize, sectorlength)
 
-    return sectors if remainder == 0 else sectors + 1
 
-     
 
-     def close(self):
 
-    """
 
-    Clean up resources after use.
 
-    
 
-    Note that the instance is no longer readable nor writable after calling close().
 
-    The method is automatically called by garbage collectors, but made public to
 
-    allow explicit cleanup.
 
-    """
 
-    if self._closefile:
 
-       try:
 
-         self.file.close()
 
-       except IOError:
 
-         pass
 
 
-     def __del__(self):
 
-    self.close()
 
-    # Parent object() has no __del__ method, otherwise it should be called here.
 
 
-     def _init_file(self):
 
-    """Initialise the file header. This will erase any data previously in the file."""
 
-    header_length = 2*SECTOR_LENGTH
 
-    if self.size > header_length:
 
-       self.file.truncate(header_length)
 
-    self.file.seek(0)
 
-    self.file.write(header_length*b'\x00')
 
-    self.size = header_length
 
 
-     def _init_header(self):
 
-    for x in range(32):
 
-       for z in range(32):
 
-         self.metadata[x,z] = ChunkMetadata(x, z)
 
 
-     def _parse_header(self):
 
-    """Read the region header and stores: offset, length and status."""
 
-    # update the file size, needed when parse_header is called after
 
-    # we have unlinked a chunk or writed a new one
 
-    self.size = self.get_size()
 
 
-    if self.size == 0:
 
-       # Some region files seems to have 0 bytes of size, and
 
-       # Minecraft handle them without problems. Take them
 
-       # as empty region files.
 
-       return
 
-    elif self.size < 2*SECTOR_LENGTH:
 
-       raise NoRegionHeader('The region file is %d bytes, too small in size to have a header.' % self.size)
 
-    
 
-    for index in range(0, SECTOR_LENGTH, 4):
 
-       x = int(index//4) % 32
 
-       z = int(index//4)//32
 
-       m = self.metadata[x, z]
 
-       
 
-       self.file.seek(index)
 
-       offset, length = unpack(">IB", b"\0" + self.file.read(4))
 
-       m.blockstart, m.blocklength = offset, length
 
-       self.file.seek(index + SECTOR_LENGTH)
 
-       m.timestamp = unpack(">I", self.file.read(4))[0]
 
-       
 
-       if offset == 0 and length == 0:
 
-         m.status = STATUS_CHUNK_NOT_CREATED
 
-       elif length == 0:
 
-         m.status = STATUS_CHUNK_ZERO_LENGTH
 
-       elif offset < 2 and offset != 0:
 
-         m.status = STATUS_CHUNK_IN_HEADER
 
-       elif SECTOR_LENGTH * offset + 5 > self.size:
 
-         # Chunk header can't be read.
 
-         m.status = STATUS_CHUNK_OUT_OF_FILE
 
-       else:
 
-         m.status = STATUS_CHUNK_OK
 
-    
 
-    # Check for chunks overlapping in the file
 
-    for chunks in self._sectors()[2:]:
 
-       if len(chunks) > 1:
 
-         # overlapping chunks
 
-         for m in chunks:
 
-        # Update status, unless these more severe errors take precedence
 
-        if m.status not in (STATUS_CHUNK_ZERO_LENGTH, STATUS_CHUNK_IN_HEADER, 
 
-                 STATUS_CHUNK_OUT_OF_FILE):
 
-           m.status = STATUS_CHUNK_OVERLAPPING
 
 
-     def _parse_chunk_headers(self):
 
-    for x in range(32):
 
-       for z in range(32):
 
-         m = self.metadata[x, z]
 
-         if m.status not in (STATUS_CHUNK_OK, STATUS_CHUNK_OVERLAPPING, \
 
-               STATUS_CHUNK_MISMATCHED_LENGTHS):
 
-        # skip to next if status is NOT_CREATED, OUT_OF_FILE, IN_HEADER,
 
-        # ZERO_LENGTH or anything else.
 
-        continue
 
-         try:
 
-        self.file.seek(m.blockstart*SECTOR_LENGTH) # offset comes in sectors of 4096 bytes
 
-        length = unpack(">I", self.file.read(4))
 
-        m.length = length[0] # unpack always returns a tuple, even unpacking one element
 
-        compression = unpack(">B",self.file.read(1))
 
-        m.compression = compression[0]
 
-         except IOError:
 
-        m.status = STATUS_CHUNK_OUT_OF_FILE
 
-        continue
 
-         if m.blockstart*SECTOR_LENGTH + m.length + 4 > self.size:
 
-        m.status = STATUS_CHUNK_OUT_OF_FILE
 
-         elif m.length <= 1: # chunk can't be zero length
 
-        m.status = STATUS_CHUNK_ZERO_LENGTH
 
-         elif m.length + 4 > m.blocklength * SECTOR_LENGTH:
 
-        # There are not enough sectors allocated for the whole block
 
-        m.status = STATUS_CHUNK_MISMATCHED_LENGTHS
 
 
-     def _sectors(self, ignore_chunk=None):
 
-    """
 
-    Return a list of all sectors, each sector is a list of chunks occupying the block.
 
-    """
 
-    sectorsize = self._bytes_to_sector(self.size)
 
-    sectors = [[] for s in range(sectorsize)]
 
-    sectors[0] = True # locations
 
-    sectors[1] = True # timestamps
 
-    for m in self.metadata.values():
 
-       if not m.is_created():
 
-         continue
 
-       if ignore_chunk == m:
 
-         continue
 
-       if m.blocklength and m.blockstart:
 
-         blockend = m.blockstart + max(m.blocklength, m.requiredblocks())
 
-         # Ensure 2 <= b < sectorsize, as well as m.blockstart <= b < blockend
 
-         for b in range(max(m.blockstart, 2), min(blockend, sectorsize)):
 
-        sectors[b].append(m)
 
-    return sectors
 
 
-     def _locate_free_sectors(self, ignore_chunk=None):
 
-    """Return a list of booleans, indicating the free sectors."""
 
-    sectors = self._sectors(ignore_chunk=ignore_chunk)
 
-    # Sectors are considered free, if the value is an empty list.
 
-    return [not i for i in sectors]
 
 
-     def _find_free_location(self, free_locations, required_sectors=1, preferred=None):
 
-    """
 
-    Given a list of booleans, find a list of <required_sectors> consecutive True values.
 
-    If no such sequence is found, return len(free_locations).
 
-    Assumes first two values are always False.
 
-    """
 
-    # check preferred (current) location
 
-    if preferred and all(free_locations[preferred:preferred+required_sectors]):
 
-       return preferred
 
-    
 
-    # check other locations
 
-    # Note: the slicing may exceed the free_location boundary.
 
-    # This implementation relies on the fact that slicing will work anyway,
 
-    # and the any() function returns True for an empty list. This ensures
 
-    # that blocks outside the file are considered Free as well.
 
-    
 
-    i = 2 # First two sectors are in use by the header
 
-    while i < len(free_locations):
 
-       if all(free_locations[i:i+required_sectors]):
 
-         break
 
-       i += 1
 
-    return i
 
 
-     def get_metadata(self):
 
-    """
 
-    Return a list of the metadata of each chunk that is defined in the regionfile.
 
-    This includes chunks which may not be readable for whatever reason,
 
-    but excludes chunks that are not yet defined.
 
-    """
 
-    return [m for m in self.metadata.values() if m.is_created()]
 
 
-     def get_chunks(self):
 
-    """
 
-    Return the x,z coordinates and length of the chunks that are defined in the regionfile.

-    This includes chunks which may not be readable for whatever reason.

-    Warning: despite the name, this function does not actually return the chunk,

-    but merely its metadata. Use get_chunk(x,z) to get the NBTFile, and then Chunk()
 
-    to get the actual chunk.
 
-    
 
-    This method is deprecated. Use :meth:`get_metadata` instead.
 
-    """
 
-    return self.get_chunk_coords()
 
 
-     def get_chunk_coords(self):
 
-    """
 
-    Return the x,z coordinates and length of the chunks that are defined in the regionfile.
 
-    This includes chunks which may not be readable for whatever reason.
 
-    
 
-    This method is deprecated. Use :meth:`get_metadata` instead.
 
-    """
 
-    chunks = []
 
-    for x in range(32):
 
-       for z in range(32):
 
-         m = self.metadata[x,z]
 
-         if m.is_created():
 
-        chunks.append({'x': x, 'z': z, 'length': m.blocklength})
 
-    return chunks
 
 
-     def iter_chunks(self):
 
-    """
 
-    Yield each readable chunk present in the region.
 
-    Chunks that can not be read for whatever reason are silently skipped.
 
-    Warning: this function returns a :class:`nbt.nbt.NBTFile` object, use ``Chunk(nbtfile)`` to get a
 
-    :class:`nbt.chunk.Chunk` instance.
 
-    """
 
-    for m in self.get_metadata():
 
-       try:
 
-         yield self.get_chunk(m.x, m.z)
 
-       except RegionFileFormatError:
 
-         pass
 
 
-     # The following method will replace 'iter_chunks'
 
-     # but the previous is kept for the moment
 
-     # until the users update their code
 
 
-     def iter_chunks_class(self):
 
-    """
 
-    Yield each readable chunk present in the region.
 
-    Chunks that can not be read for whatever reason are silently skipped.
 
-    This function returns a :class:`nbt.chunk.Chunk` instance.
 
-    """
 
-    for m in self.get_metadata():
 
-       try:
 
-         yield self.chunkclass(self.get_chunk(m.x, m.z))
 
-       except RegionFileFormatError:
 
-         pass
 
 
-     def __iter__(self):
 
-    return self.iter_chunks()
 
 
-     def get_timestamp(self, x, z):
 
-    """
 
-    Return the timestamp of when chunk (x,z) was last modified.
 
-    
 
-    Note that this returns the timestamp as-is. A timestamp may exist, 
 
-    while the chunk does not, or it may return a timestamp of 0 even 
 
-    while the chunk exists.
 
-    
 
-    To convert to an actual date, use `datetime.fromtimestamp()`.
 
-    """
 
-    return self.metadata[x,z].timestamp
 
 
-     def chunk_count(self):
 
-    """Return the number of defined chunks. This includes potentially corrupt chunks."""
 
-    return len(self.get_metadata())
 
 
-     def get_blockdata(self, x, z):
 
-    """
 
-    Return the decompressed binary data representing a chunk.
 
-    
 
-    May raise a RegionFileFormatError().
 
-    If decompression of the data succeeds, all available data is returned, 
 
-    even if it is shorter than what is specified in the header (e.g. in case
 
-    of a truncated file and non-compressed data).
 
-    """
 
-    # read metadata block
 
-    m = self.metadata[x, z]
 
-    if m.status == STATUS_CHUNK_NOT_CREATED:
 
-       raise InconceivedChunk("Chunk %d,%d is not present in region" % (x,z))
 
-    elif m.status == STATUS_CHUNK_IN_HEADER:
 
-       raise RegionHeaderError('Chunk %d,%d is in the region header' % (x,z))
 
-    elif m.status == STATUS_CHUNK_OUT_OF_FILE and (m.length <= 1 or m.compression == None):
 
-       # Chunk header is outside of the file.
 
-       raise RegionHeaderError('Chunk %d,%d is partially/completely outside the file' % (x,z))
 
-    elif m.status == STATUS_CHUNK_ZERO_LENGTH:
 
-       if m.blocklength == 0:
 
-         raise RegionHeaderError('Chunk %d,%d has zero length' % (x,z))
 
-       else:
 
-         raise ChunkHeaderError('Chunk %d,%d has zero length' % (x,z))
 
-    elif m.blockstart * SECTOR_LENGTH + 5 >= self.size:
 
-       raise RegionHeaderError('Chunk %d,%d is partially/completely outside the file' % (x,z))
 
 
-    # status is STATUS_CHUNK_OK, STATUS_CHUNK_MISMATCHED_LENGTHS, STATUS_CHUNK_OVERLAPPING
 
-    # or STATUS_CHUNK_OUT_OF_FILE.
 
-    # The chunk is always read, but in case of an error, the exception may be different 
 
-    # based on the status.
 
 
-    err = None
 
-    try:
 
-       # offset comes in sectors of 4096 bytes + length bytes + compression byte
 
-       self.file.seek(m.blockstart * SECTOR_LENGTH + 5)
 
-       # Do not read past the length of the file.
 
-       # The length in the file includes the compression byte, hence the -1.
 
-       length = min(m.length - 1, self.size - (m.blockstart * SECTOR_LENGTH + 5))
 
-       chunk = self.file.read(length)
 
-       
 
-       if (m.compression == COMPRESSION_GZIP):
 
-         # Python 3.1 and earlier do not yet support gzip.decompress(chunk)
 
-         f = gzip.GzipFile(fileobj=BytesIO(chunk))
 
-         chunk = bytes(f.read())
 
-         f.close()
 
-       elif (m.compression == COMPRESSION_ZLIB):
 
-         chunk = zlib.decompress(chunk)
 
-       elif m.compression != COMPRESSION_NONE:
 
-         raise ChunkDataError('Unknown chunk compression/format (%s)' % m.compression)
 
-       
 
-       return chunk
 
-    except RegionFileFormatError:
 
-       raise
 
-    except Exception as e:
 
-       # Deliberately catch the Exception and re-raise.
 
-       # The details in gzip/zlib/nbt are irrelevant, just that the data is garbled.
 
-       err = '%s' % e # avoid str(e) due to Unicode issues in Python 2.
 
-    if err:
 
-       # don't raise during exception handling to avoid the warning 
 
-       # "During handling of the above exception, another exception occurred".
 
-       # Python 3.3 solution (see PEP 409 & 415): "raise ChunkDataError(str(e)) from None"
 
-       if m.status == STATUS_CHUNK_MISMATCHED_LENGTHS:
 
-         raise ChunkHeaderError('The length in region header and the length in the header of chunk %d,%d are incompatible' % (x,z))
 
-       elif m.status == STATUS_CHUNK_OVERLAPPING:
 
-         raise ChunkHeaderError('Chunk %d,%d is overlapping with another chunk' % (x,z))
 
-       else:
 
-         raise ChunkDataError(err)
 
 
-     def get_nbt(self, x, z):
 
-    """
 
-    Return a NBTFile of the specified chunk.
 
-    Raise InconceivedChunk if the chunk is not included in the file.
 
-    """
 
-    # TODO: cache results?
 
-    data = self.get_blockdata(x, z) # This may raise a RegionFileFormatError.
 
-    data = BytesIO(data)
 
-    err = None
 
-    try:
 
-       nbt = NBTFile(buffer=data)
 
-       if self.loc.x != None:
 
-         x += self.loc.x*32
 
-       if self.loc.z != None:
 
-         z += self.loc.z*32
 
-       nbt.loc = Location(x=x, z=z)
 
-       return nbt
 
-       # this may raise a MalformedFileError. Convert to ChunkDataError.
 
-    except MalformedFileError as e:
 
-       err = '%s' % e # avoid str(e) due to Unicode issues in Python 2.
 
-    if err:
 
-       raise ChunkDataError(err)
 
 
-     def get_chunk(self, x, z):
 
-    """
 
-    Return a NBTFile of the specified chunk.
 
-    Raise InconceivedChunk if the chunk is not included in the file.
 
-    
 
-    Note: this function may be changed later to return a Chunk() rather 
 
-    than a NBTFile() object. To keep the old functionality, use get_nbt().
 
-    """
 
-    return self.get_nbt(x, z)
 
 
-     def write_blockdata(self, x, z, data, compression=COMPRESSION_ZLIB):
 
-    """
 
-    Compress the data, write it to file, and add pointers in the header so it 
 
-    can be found as chunk(x,z).
 
-    """
 
-    if compression == COMPRESSION_GZIP:
 
-       # Python 3.1 and earlier do not yet support `data = gzip.compress(data)`.
 
-       compressed_file = BytesIO()
 
-       f = gzip.GzipFile(fileobj=compressed_file, mode='wb') # explicit write mode; BytesIO has no mode attribute
 
-       f.write(data)
 
-       f.close()
 
-       compressed_file.seek(0)
 
-       data = compressed_file.read()
 
-       del compressed_file
 
-    elif compression == COMPRESSION_ZLIB:
 
-       data = zlib.compress(data) # use zlib compression, rather than Gzip
 
-    elif compression != COMPRESSION_NONE:
 
-       raise ValueError("Unknown compression type %d" % compression)
 
-    length = len(data)
 
 
-    # 5 extra bytes are required for the chunk block header
 
-    nsectors = self._bytes_to_sector(length + 5)
 
 
-    if nsectors >= 256:
 
-       raise ChunkDataError("Chunk is too large (%d sectors exceeds 255 maximum)" % (nsectors))
 
 
-    # Ensure file has a header
 
-    if self.size < 2*SECTOR_LENGTH:
 
-       self._init_file()
 
 
-    # search for a place where to write the chunk:
 
-    current = self.metadata[x, z]
 
-    free_sectors = self._locate_free_sectors(ignore_chunk=current)
 
-    sector = self._find_free_location(free_sectors, nsectors, preferred=current.blockstart)
 
 
-    # If file is smaller than sector*SECTOR_LENGTH (it was truncated), pad it with zeroes.
 
-    if self.size < sector*SECTOR_LENGTH:
 
-       # jump to end of file
 
-       self.file.seek(0, SEEK_END)
 
-       self.file.write((sector*SECTOR_LENGTH - self.size) * b"\x00")
 
-       assert self.file.tell() == sector*SECTOR_LENGTH
 
 
-    # write out chunk to region
 
-    self.file.seek(sector*SECTOR_LENGTH)
 
-    self.file.write(pack(">I", length + 1)) #length field
 
-    self.file.write(pack(">B", compression)) #compression field
 
-    self.file.write(data) #compressed data
 
 
-    # Write zeros up to the end of the chunk
 
-    remaining_length = SECTOR_LENGTH * nsectors - length - 5
 
-    self.file.write(remaining_length * b"\x00")
 
 
-    #seek to header record and write offset and length records
 
-    self.file.seek(4 * (x + 32*z))
 
-    self.file.write(pack(">IB", sector, nsectors)[1:])
 
 
-    #write timestamp
 
-    self.file.seek(SECTOR_LENGTH + 4 * (x + 32*z))
 
-    timestamp = int(time.time())
 
-    self.file.write(pack(">I", timestamp))
 
 
-    # Update free_sectors with newly written block
 
-    # This is required for calculating file truncation and zeroing freed blocks.
 
-    free_sectors.extend((sector + nsectors - len(free_sectors)) * [True])
 
-    for s in range(sector, sector + nsectors):
 
-       free_sectors[s] = False
 
 
-    # Check if file should be truncated:
-    truncate_count = list(reversed(free_sectors)).index(False)
-    if truncate_count > 0:
-       self.size = SECTOR_LENGTH * (len(free_sectors) - truncate_count)
-       self.file.truncate(self.size)
-       free_sectors = free_sectors[:-truncate_count]

-    # Zero out the sectors that this chunk no longer occupies
-    for s in range(current.blockstart, min(current.blockstart + current.blocklength, len(free_sectors))):
-       if free_sectors[s]:
-         # zero sector s
-         self.file.seek(SECTOR_LENGTH*s)
-         self.file.write(SECTOR_LENGTH*b'\x00')

-    # update file size and header information
-    self.size = max((sector + nsectors)*SECTOR_LENGTH, self.size)
-    assert self.get_size() == self.size
-    current.blockstart = sector
-    current.blocklength = nsectors
-    current.status = STATUS_CHUNK_OK
-    current.timestamp = timestamp
-    current.length = length + 1
-    current.compression = COMPRESSION_ZLIB

-    # self.parse_header()
-    # self.parse_chunk_headers()
 
 
-     def write_chunk(self, x, z, nbt_file):
-    """
-    Pack the NBT file as binary data, and write to file in a compressed format.
-    """
-    data = BytesIO()
-    nbt_file.write_file(buffer=data) # render to buffer; uncompressed
-    self.write_blockdata(x, z, data.getvalue())
 
 
-     def unlink_chunk(self, x, z):
-    """
-    Remove a chunk from the header of the region file.
-    Fragmentation is not a problem; chunks are written to free sectors when possible.
-    """
-    # This function fails for an empty file. If that is the case, just return.
-    if self.size < 2*SECTOR_LENGTH:
-       return

-    # zero the region header for the chunk (offset, length and timestamp)
-    self.file.seek(4 * (x + 32*z))
-    self.file.write(pack(">IB", 0, 0)[1:])
-    self.file.seek(SECTOR_LENGTH + 4 * (x + 32*z))
-    self.file.write(pack(">I", 0))

-    # Check if file should be truncated:
-    current = self.metadata[x, z]
-    free_sectors = self._locate_free_sectors(ignore_chunk=current)
-    truncate_count = list(reversed(free_sectors)).index(False)
-    if truncate_count > 0:
-       self.size = SECTOR_LENGTH * (len(free_sectors) - truncate_count)
-       self.file.truncate(self.size)
-       free_sectors = free_sectors[:-truncate_count]

-    # Zero out the sectors that the removed chunk occupied
-    for s in range(current.blockstart, min(current.blockstart + current.blocklength, len(free_sectors))):
-       if free_sectors[s]:
-         # zero sector s
-         self.file.seek(SECTOR_LENGTH*s)
-         self.file.write(SECTOR_LENGTH*b'\x00')

-    # update the header
-    self.metadata[x, z] = ChunkMetadata(x, z)
 
 
-     def _classname(self):
-    """Return the fully qualified class name."""
-    if self.__class__.__module__ in (None,):
-       return self.__class__.__name__
-    else:
-       return "%s.%s" % (self.__class__.__module__, self.__class__.__name__)

-     def __str__(self):
-    if self.filename:
-       return "<%s(%r)>" % (self._classname(), self.filename)
-    else:
-       return '<%s object at %d>' % (self._classname(), id(self))

-     def __repr__(self):
-    if self.filename:
-       return "%s(%r)" % (self._classname(), self.filename)
-    else:
-       return '<%s object at %d>' % (self._classname(), id(self))
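That is the end of nbt/region.py. In case it helps, a minimal usage sketch of the RegionFile class pasted above; the file path and chunk coordinates are placeholders, point them at a region file from your own save:

    from nbt.region import RegionFile, InconceivedChunk

    reg = RegionFile("region/r.0.0.mca")       # open one region file (placeholder path)
    print(reg.chunk_count())                   # number of chunks defined in this file
    for m in reg.get_metadata():               # per-chunk header info, as parsed by _parse_header()
        print(m.x, m.z, m.length, m.timestamp)

    try:
        chunk_nbt = reg.get_nbt(0, 0)          # nbt.nbt.NBTFile for chunk (0, 0) of this region
        print(chunk_nbt.pretty_tree())         # dump the whole tag tree
    except InconceivedChunk:
        print("chunk (0, 0) has not been generated yet")
    reg.close()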
 
nbt/world.py
代码:
 
- """
 
- Handles a Minecraft world save using either the Anvil or McRegion format.
 
- For more information about the world format:
 
- https://minecraft.gamepedia.com/Level_format
 
- """
 
 
- import os, glob, re
 
- from . import region
 
- from . import chunk
 
- from .region import InconceivedChunk, Location
 
 
- class UnknownWorldFormat(Exception):
 
-     """Unknown or invalid world folder."""
 
-     def __init__(self, msg=""):
 
-    self.msg = msg
 
 
 
- class _BaseWorldFolder(object):
 
-     """
 
-     Abstract class, representing either a McRegion or Anvil world folder.
 
-     This class will use either Anvil or McRegion, with Anvil the preferred format.
 
-     Simply calling WorldFolder() will do this automatically.
 
-     """
 
-     type = "Generic"
 
-     extension = ''
 
-     chunkclass = chunk.Chunk
 
 
-     def __init__(self, world_folder):
 
-    """Initialize a WorldFolder."""
 
-    self.worldfolder = world_folder
 
-    self.regionfiles = {}
 
-    self.regions  = {}
 
-    self.chunks= None
 
-    # os.listdir triggers an OSError for non-existant directories or permission errors.
 
-    # This is needed, because glob.glob silently returns no files.
 
-    os.listdir(world_folder)
 
-    self.set_regionfiles(self.get_filenames())
 
 
-     def get_filenames(self):
 
-    """Find all matching file names in the world folder.
 
-    
 
-    This method is private, and its use is deprecated. Use get_regionfiles() instead."""
 
-    # Warning: glob returns an empty list if the directory is unreadable, without raising an Exception
 
-    return list(glob.glob(os.path.join(self.worldfolder,'region','r.*.*.'+self.extension)))
 
 
-     def set_regionfiles(self, filenames):
 
-    """
 
-    This method directly sets the region files for this instance to use.
 
-    It assumes the filenames are in the form r.<x-digit>.<z-digit>.<extension>
 
-    """
 
-    for filename in filenames:
 
-       # Assume that filenames have the name r.<x-digit>.<z-digit>.<extension>
 
-       m = re.match(r"r.(\-?\d+).(\-?\d+)."+self.extension, os.path.basename(filename))
 
-       if m:
 
-         x = int(m.group(1))
 
-         z = int(m.group(2))
 
-       else:
 
-         # Only raised if a .mca or .mcr file exists which does not comply with the

-         # r.<x-digit>.<z-digit>.<extension> filename format. This may raise false

-         # errors if a copy is made, e.g. "r.0.-1 copy.mca". If this is an issue, override

-         # get_filenames(). In most cases, it is an error, and we like to raise that.
 
-         # Changed, no longer raise error, because we want to continue the loop.
 
-         # raise UnknownWorldFormat("Unrecognized filename format %s" % os.path.basename(filename))
 
-         # TODO: log to stderr using logging facility.
 
-         continue # skip filenames that do not match, instead of reusing stale x,z values
 
-       self.regionfiles[(x,z)] = filename
 
 
-     def get_regionfiles(self):
 
-    """Return a list of full path of all region files."""
 
-    return list(self.regionfiles.values())
 
 
-     def nonempty(self):
 
-    """Return True is the world is non-empty."""
 
-    return len(self.regionfiles) > 0
 
 
-     def get_region(self, x,z):
 
-    """Get a region using x,z coordinates of a region. Cache results."""
 
-    if (x,z) not in self.regions:
 
-       if (x,z) in self.regionfiles:
 
-         self.regions[(x,z)] = region.RegionFile(self.regionfiles[(x,z)])
 
-       else:
 
-         # Return an empty RegionFile object
 
-         # TODO: this does not yet allow for saving of the region file
 
-         # TODO: this currently fails with a ValueError!
 
-         # TODO: generate the correct name, and create the file
 
-         # and add the file to self.regionfiles
 
-         self.regions[(x,z)] = region.RegionFile()
 
-       self.regions[(x,z)].loc = Location(x=x,z=z)
 
-    return self.regions[(x,z)]
 
 
-     def iter_regions(self):
 
-    """
 
-    Return an iterable list of all region files. Use this function if you only
 
-    want to loop through each region file once, and do not want to cache the results.
 
-    """
 
-    # TODO: Implement BoundingBox
 
-    # TODO: Implement sort order
 
-    for x,z in self.regionfiles.keys():
 
-       close_after_use = False
 
-       if (x,z) in self.regions:
 
-         regionfile = self.regions[(x,z)]
 
-       else:
 
-         # It is not yet cached.
 
-         # Get file, but do not cache later.
 
-         regionfile = region.RegionFile(self.regionfiles[(x,z)], chunkclass = self.chunkclass)
 
-         regionfile.loc = Location(x=x,z=z)
 
-         close_after_use = True
 
-       try:
 
-         yield regionfile
 
-       finally:
 
-         if close_after_use:
 
-        regionfile.close()
 
 
-     def call_for_each_region(self, callback_function, boundingbox=None):
 
-    """
 
-    Return an iterable that calls callback_function for each region file 
 
-    in the world. This is equivalent to:
 
-    ```
 
-    for the_region in iter_regions():
 
-         yield callback_function(the_region)
 
-    ```

-    

-    This function is threaded. It uses pickle to pass values between threads.

-    See [What can be pickled and unpickled?](https://docs.python.org/library/pickle.html#what-can-be-pickled-and-unpickled) in the Python documentation

-    for limitations on the output of `callback_function()`.
 
-    """
 
-    raise NotImplementedError()
 
 
-     def get_nbt(self,x,z):
 
-    """
 
-    Return a NBT specified by the chunk coordinates x,z. Raise InconceivedChunk
 
-    if the NBT file is not yet generated. To get a Chunk object, use get_chunk.
 
-    """
 
-    rx,cx = divmod(x,32)
 
-    rz,cz = divmod(z,32)
 
-    if (rx,rz) not in self.regions and (rx,rz) not in self.regionfiles:
 
-       raise InconceivedChunk("Chunk %s,%s is not present in world" % (x,z))
 
-    nbt = self.get_region(rx,rz).get_nbt(cx,cz)
 
-    assert nbt != None
 
-    return nbt
 
 
-     def set_nbt(self,x,z,nbt):
 
-    """
 
-    Set a chunk. Overrides the NBT if it already existed. If the NBT did not exist,
 
-    adds it to the Regionfile. May create a new Regionfile if that did not exist yet.
 
-    nbt must be a nbt.NBTFile instance, not a Chunk or regular TAG_Compound object.
 
-    """
 
-    raise NotImplementedError()
 
-    # TODO: implement
 
 
-     def iter_nbt(self):
 
-    """
 
-    Return an iterable list of all NBT. Use this function if you only
 
-    want to loop through the chunks once, and don't need the block or data arrays.
 
-    """
 
-    # TODO: Implement BoundingBox
 
-    # TODO: Implement sort order
 
-    for region in self.iter_regions():
 
-       for c in region.iter_chunks():
 
-         yield c
 
 
-     def call_for_each_nbt(self, callback_function, boundingbox=None):
 
-    """
 
-    Return an iterable that calls callback_function for each NBT structure 
 
-    in the world. This is equivalent to:
 
-    ```
 
-    for the_nbt in iter_nbt():
 
-         yield callback_function(the_nbt)
 
-    ```

-    

-    This function is threaded. It uses pickle to pass values between threads.

-    See [What can be pickled and unpickled?](https://docs.python.org/library/pickle.html#what-can-be-pickled-and-unpickled) in the Python documentation

-    for limitations on the output of `callback_function()`.
 
-    """
 
-    raise NotImplementedError()
 
 
-     def get_chunk(self,x,z):
 
-    """
 
-    Return a chunk specified by the chunk coordinates x,z. Raise InconceivedChunk
 
-    if the chunk is not yet generated. To get the raw NBT data, use get_nbt.
 
-    """
 
-    return self.chunkclass(self.get_nbt(x, z))
 
 
-     def get_chunks(self, boundingbox=None):
 
-    """
 
-    Return a list of all chunks. Use this function if you access the chunk
 
-    list frequently and want to cache the result.
 
-    Use iter_chunks() if you only want to loop through the chunks once or have a
 
-    very large world.
 
-    """
 
-    if self.chunks == None:
 
-       self.chunks = list(self.iter_chunks())
 
-    return self.chunks
 
 
-     def iter_chunks(self):
 
-    """
 
-    Return an iterable list of all chunks. Use this function if you only
 
-    want to loop through the chunks once or have a very large world.
 
-    Use get_chunks() if you access the chunk list frequently and want to cache
 
-    the results. Use iter_nbt() if you are concerned about speed and don't want
 
-    to parse the block data.
 
-    """
 
-    # TODO: Implement BoundingBox
 
-    # TODO: Implement sort order
 
-    for c in self.iter_nbt():
 
-       yield self.chunkclass(c)
 
 
-     def chunk_count(self):
 
-    """Return a count of the chunks in this world folder."""
 
-    c = 0
 
-    for r in self.iter_regions():
 
-       c += r.chunk_count()
 
-    return c
 
 
-     def get_boundingbox(self):
 
-    """
 
-    Return minimum and maximum x and z coordinates of the chunks that
 
-    make up this world save
 
-    """
 
-    b = BoundingBox()
 
-    for rx,rz in self.regionfiles.keys():
 
-       region = self.get_region(rx,rz)
 
-       rx,rz = 32*rx,32*rz
 
-       for cc in region.get_chunk_coords():
 
-         x,z = (rx+cc['x'],rz+cc['z'])
 
-         b.expand(x,None,z)
 
-    return b
 
 
-     def __repr__(self):
 
-    return "%s(%r)" % (self.__class__.__name__,self.worldfolder)
 
 
 
- class McRegionWorldFolder(_BaseWorldFolder):
 
-     """Represents a world save using the old McRegion format."""
 
-     type = "McRegion"
 
-     extension = 'mcr'
 
-     chunkclass = chunk.McRegionChunk
 
 
 
- class AnvilWorldFolder(_BaseWorldFolder):
 
-     """Represents a world save using the new Anvil format."""
 
-     type = "Anvil"
 
-     extension = 'mca'
 
-     chunkclass = chunk.AnvilChunk
 
 
 
- class _WorldFolderFactory(object):
 
-     """Factory class: instantiate the subclassses in order, and the first instance 
 
-     whose nonempty() method returns True is returned. If no nonempty() returns True,
 
-     a UnknownWorldFormat exception is raised."""
 
-     def __init__(self, subclasses):
 
-    self.subclasses = subclasses
 
-     def __call__(self, *args, **kwargs):
 
-    for cls in self.subclasses:
 
-       wf = cls(*args, **kwargs)
 
-       if wf.nonempty(): # Check if the world is non-empty
 
-         return wf
 
-    raise UnknownWorldFormat("Empty world or unknown format")
 
 
- WorldFolder = _WorldFolderFactory([AnvilWorldFolder, McRegionWorldFolder])
 
- """
 
- Factory instance that returns an AnvilWorldFolder or McRegionWorldFolder

- instance, or raises an UnknownWorldFormat.
 
- """
 
 
 
 
- class BoundingBox(object):
 
-     """A bounding box of x,y,z coordinates."""
 
-     def __init__(self, minx=None, maxx=None, miny=None, maxy=None, minz=None, maxz=None):
 
-    self.minx,self.maxx = minx, maxx
 
-    self.miny,self.maxy = miny, maxy
 
-    self.minz,self.maxz = minz, maxz
 
-     def expand(self,x,y,z):
 
-    """
 
-    Expand the bounding box to include the point (x, y, z); None values are ignored.
 
-    """
 
-    if x != None:
 
-       if self.minx is None or x < self.minx:
 
-         self.minx = x
 
-       if self.maxx is None or x > self.maxx:
 
-         self.maxx = x
 
-    if y != None:
 
-       if self.miny is None or y < self.miny:
 
-         self.miny = y
 
-       if self.maxy is None or y > self.maxy:
 
-         self.maxy = y
 
-    if z != None:
 
-       if self.minz is None or z < self.minz:
 
-         self.minz = z
 
-       if self.maxz is None or z > self.maxz:
 
-         self.maxz = z
 
-     def lenx(self):
 
-    if self.maxx is None or self.minx is None:
 
-       return 0
 
-    return self.maxx-self.minx+1
 
-     def leny(self):
 
-    if self.maxy is None or self.miny is None:
 
-       return 0
 
-    return self.maxy-self.miny+1
 
-     def lenz(self):
 
-    if self.maxz is None or self.minz is None:
 
-       return 0
 
-    return self.maxz-self.minz+1
 
-     def __repr__(self):
 
-    return "%s(%s,%s,%s,%s,%s,%s)" % (self.__class__.__name__,self.minx,self.maxx,
 
- self.miny,self.maxy,self.minz,self.maxz)
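And that is nbt/world.py. A short sketch of how the WorldFolder factory at the bottom is meant to be used; the save path is a placeholder:

    from nbt.world import WorldFolder

    world = WorldFolder("saves/MyWorld")        # placeholder path; raises UnknownWorldFormat if empty
    print(world.chunk_count())                  # total generated chunks over all region files
    print(world.get_boundingbox())              # min/max chunk coordinates of the save
    for chunk_nbt in world.iter_nbt():          # raw per-chunk NBT, no block parsing
        print(chunk_nbt.loc.x, chunk_nbt.loc.z) # Location set by RegionFile.get_nbt() above
        break                                   # only show the first chunk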
/setup.py
代码:
- #!/usr/bin/env python
 
 
- from setuptools import setup
 
- from nbt import VERSION
 
 
- setup(
-    name = 'NBT',
-    version = ".".join(str(x) for x in VERSION),
-    description = 'Named Binary Tag Reader/Writer',
-    author = 'Thomas Woolford',
-    author_email = '[email protected]',
-    url = 'http://github.com/twoolie/NBT',
-    license = open("LICENSE.txt").read(),
-    long_description = open("README.txt").read(),
-    packages = ['nbt'],
-    classifiers = [
-       "Development Status :: 5 - Production/Stable",
-       "Intended Audience :: Developers",
-       "License :: OSI Approved :: MIT License",
-       "Operating System :: OS Independent",
-       "Programming Language :: Python",
-       "Programming Language :: Python :: 2.7",
-       "Programming Language :: Python :: 3.3",
-       "Programming Language :: Python :: 3.4",
-       "Programming Language :: Python :: 3.5",
-       "Programming Language :: Python :: 3.6",
-       "Topic :: Games/Entertainment",
-       "Topic :: Software Development :: Libraries :: Python Modules"
-    ]
- )
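And that is the whole package. For command storage files specifically: the files under a world's data/ folder are gzip-compressed NBT, so once the nbt/ package above is importable, reading and editing them only takes a few lines. A rough sketch; the path and the tag names in the commented edit are assumptions, not a guaranteed layout:

    from nbt.nbt import NBTFile, TAG_Int

    # Command storage is saved per namespace as <world>/data/command_storage_<namespace>.dat,
    # plain gzip-compressed NBT, which NBTFile reads and writes directly.
    path = "data/command_storage_minecraft.dat"   # placeholder path inside a world save
    storage = NBTFile(path)
    print(storage.pretty_tree())                  # inspect what the file currently contains

    # Hypothetical edit -- the layout under data/contents depends on what was
    # written with /data modify storage ... in game:
    # storage["data"]["contents"]["mypack"]["counter"] = TAG_Int(42)

    storage.write_file(path)                      # write the modified tree back, re-gzipped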