I'm really confused about the
codecs.open function
. When I do this:
file = codecs.open("temp", "w", "utf-8")
file.write(codecs.BOM_UTF8)
file.close()
it gives me the error
UnicodeDecodeError: 'ascii' codec can't decode byte 0xef in position 0: ordinal not in range(128)
If I do:
file = open("temp", "w")
file.write(codecs.BOM_UTF8)
file.close()
it works fine.
The question is why the first method fails, and how do I insert the BOM?
If the second method is the right way of doing it, what is the point of using
codecs.open(filename, "w", "utf-8")
at all?
codecs.BOM_UTF8
is a byte string, not a Unicode string. I suspect the file handler is trying to guess what you really mean, on the basis of "I'm meant to be writing Unicode as UTF-8-encoded text, but you've given me a byte string!"
Try writing the Unicode string for the byte order mark directly (i.e. Unicode U+FEFF), so that the file just encodes it as UTF-8:
import codecs
file = codecs.open("lol", "w", "utf-8")
file.write(u'\ufeff')
file.close()
(That seems to give the right answer - a file with the bytes EF BB BF.)
EDIT: S. Lott's
suggestion of using "utf-8-sig" as the encoding is better than explicitly writing the BOM yourself, but I'll leave this answer here as it explains what was going wrong before.
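For reference, here is what the "utf-8-sig" approach looks like on Python 3 (a minimal sketch; the filename is just an example):

```python
# "utf-8-sig" emits the UTF-8 BOM automatically on the first write,
# so no manual BOM handling is needed (Python 3).
with open("temp", "w", encoding="utf-8-sig") as f:
    f.write("hello")

# Reading back with "utf-8-sig" strips the BOM transparently.
with open("temp", encoding="utf-8-sig") as f:
    content = f.read()  # "hello", with no leading '\ufeff'
```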
No library is needed. In Python 3:
with open('text.txt', 'w', encoding='utf-8') as f:
f.write(text)
On the Unicode issues involved, the Python interpreter can provide more insight.
Jon Skeet is right (unusually) about the codecs module -- it contains byte strings:
>>> import codecs
>>> codecs.BOM
'\xff\xfe'
>>> codecs.BOM_UTF8
'\xef\xbb\xbf'
>>>
Picking another nit: the
BOM
has a standard Unicode name, and it can be entered as:
>>> bom= u"\N{ZERO WIDTH NO-BREAK SPACE}"
>>> bom
u'\ufeff'
It is also accessible via unicodedata:
>>> import unicodedata
>>> unicodedata.lookup('ZERO WIDTH NO-BREAK SPACE')
u'\ufeff'
>>>
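For completeness, the same bytes-versus-text distinction still holds in Python 3, where the split is explicit in the literal syntax (a quick check, verifiable in any Python 3 interpreter):

```python
import codecs

# In Python 3, codecs.BOM_UTF8 is a bytes object; decoding it yields
# the one-character str '\ufeff' (ZERO WIDTH NO-BREAK SPACE).
assert isinstance(codecs.BOM_UTF8, bytes)
assert codecs.BOM_UTF8 == b'\xef\xbb\xbf'
assert codecs.BOM_UTF8.decode('utf-8') == '\ufeff'
```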
# -*- encoding: utf-8 -*-
# convert a file of unknown encoding to utf-8 (Python 2)
import codecs
import commands

file_location = "jumper.sub"
file_encoding = commands.getoutput('file -b --mime-encoding %s' % file_location)
file_stream = codecs.open(file_location, 'r', file_encoding)
file_output = codecs.open(file_location + "b", 'w', 'utf-8')
for l in file_stream:
    file_output.write(l)
file_stream.close()
file_output.close()
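Since the commands module is Python 2-only, a rough Python 3 equivalent might look like this (a sketch: it shells out to the `file` utility when available and otherwise assumes UTF-8; the function names and filenames are made up for illustration):

```python
import shutil
import subprocess

def detect_encoding(path):
    # Use the `file` utility when present (Linux/macOS); otherwise
    # fall back to assuming UTF-8. Note `file` may report labels such
    # as "binary" that are not valid Python codec names.
    if shutil.which("file"):
        return subprocess.check_output(
            ["file", "-b", "--mime-encoding", path], text=True).strip()
    return "utf-8"

def convert_to_utf8(src_path, dst_path):
    # Re-encode src_path into dst_path as UTF-8, line by line.
    encoding = detect_encoding(src_path)
    with open(src_path, "r", encoding=encoding) as src, \
         open(dst_path, "w", encoding="utf-8") as dst:
        for line in src:
            dst.write(line)
```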
pathlib:
import pathlib
pathlib.Path("text.txt").write_text(text, encoding='utf-8') #or utf-8-sig for BOM
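A full round trip with utf-8-sig through pathlib might look like this (a small sketch; text.txt is just an example name):

```python
import pathlib

p = pathlib.Path("text.txt")
p.write_text("héllo", encoding="utf-8-sig")    # file starts with EF BB BF
text_back = p.read_text(encoding="utf-8-sig")  # BOM is stripped on read
```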
def read_files(file_path):
    with open(file_path, encoding='utf8') as f:
        text = f.read()
    return text
**OR (AND)**
def write_files(text, file_path):  # renamed: this one writes
    with open(file_path, 'wb') as f:  # 'wb', not 'rb', since we write
        f.write(text.encode('utf8', 'ignore'))
**OR**
from docx import Document  # python-docx (import added for completeness)

document = Document()
document.add_heading(file_path.name, 0)
file_content = file_path.read_text(encoding='UTF-8')
document.add_paragraph(file_content)
**OR**
def read_text_from_file(cale_fisier):
    text = cale_fisier.read_text(encoding='UTF-8')
    print("What I read: ", text)
    return text  # return the text that was read

def save_text_into_file(cale_fisier, text):
    f = open(cale_fisier, "w", encoding='utf-8')  # open the file
    print("What I wrote: ", text)
    f.write(text)  # write the content to the file
    f.close()
**OR**
def read_text_from_file(file_path):
    with open(file_path, encoding='utf8', errors='ignore') as f:
        text = f.read()
    return text  # return the text that was read
**OR**
def write_to_file(text, file_path):
    with open(file_path, 'wb') as f:
        f.write(text.encode('utf8', 'ignore'))  # write the content to the file
import chardet

def read_text_from_file(file_path):
    with open(file_path, 'rb') as f:
        raw_data = f.read()
    try:
        # Try strict UTF-8 first; with errors='ignore' the except
        # branch below would never trigger
        return raw_data.decode('utf-8')
    except UnicodeDecodeError:
        pass
    # If UTF-8 fails, try automatic encoding detection
    encoding = chardet.detect(raw_data)['encoding']
    if encoding is not None:
        try:
            return raw_data.decode(encoding, errors='ignore')
        except UnicodeDecodeError:
            pass
    raise Exception(f"Error: could not decode file {file_path} with UTF-8 or with the detected encoding.")
def write_to_file(text, file_path, encoding='utf8'):
    """
    This function writes a text to a file.
    text: the text you want to write
    file_path: the path of the file you want to write to
    """
    with open(file_path, 'wb') as f:
        f.write(text.encode(encoding, 'ignore'))
df.to_excel("somefile.xlsx", sheet_name="export")  # to_excel is a DataFrame method; recent pandas no longer accepts an encoding argument
I believe this handles most international characters.
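Related to the pandas case: if the goal is a file that Excel opens with the right characters, a common trick is to write CSV with utf-8-sig, so the BOM tells Excel which encoding to use (a sketch; the column names and file name are made up):

```python
import pandas as pd

df = pd.DataFrame({"name": ["Zoë", "José", "北京"]})
# utf-8-sig prepends the UTF-8 BOM, which Excel uses to pick the encoding
df.to_csv("export.csv", index=False, encoding="utf-8-sig")
```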