How do I use tensorflow_io's IODataset?


I'm trying to write a program that can use malicious pcap files as a dataset and predict whether other pcap files contain malicious packets. After digging through the TensorFlow repositories I found TensorIO, but I can't figure out how to use the dataset to build a model and make predictions.

Here is my code:

%tensorflow_version 2.x
import tensorflow as tf
import numpy as np
from tensorflow import keras

try:
  import tensorflow_io as tfio
  import tensorflow_datasets as tfds
except ImportError:
  !pip install tensorflow-io
  !pip install tensorflow-datasets

import tensorflow_io as tfio
import tensorflow_datasets as tfds

# print(tf.__version__)

dataset = tfio.IODataset.from_pcap("dataset.pcap")
print(dataset) # <PcapIODataset shapes: ((), ()), types: (tf.float64, tf.string)>

(Using Google Colab)

I've tried looking for an answer online, but couldn't find anything.

1 Answer

I downloaded two pcap files and concatenated them, then extracted packet_timestamp and packet_data. You will need to preprocess packet_data according to your requirements. If you have any labels to add, you can add them to the training dataset (in the model example below I created a dummy label of all zeros and added it as a column). If the labels are in a file, you can zip them with the pcap dataset. Model.fit and Model.evaluate simply take a dataset of (features, labels) pairs.
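
For example, if the labels live in a separate text file with one label per packet, a minimal sketch of zipping them with the pcap dataset could look like the following. The file name labels.txt and the 0/1 encoding are assumptions for illustration, not part of the original answer.

import tensorflow as tf
import tensorflow_io as tfio

# the pcap dataset yields (packet_timestamp, packet_data) pairs
feature_ds = tfio.IODataset.from_pcap('dataset.pcap')

# hypothetical labels.txt: one 0/1 label per line, in packet order
label_ds = tf.data.TextLineDataset('labels.txt').map(
    lambda line: tf.strings.to_number(line, out_type=tf.int32))

# Model.fit / Model.evaluate can consume the resulting (features, label) pairs
train_ds = tf.data.Dataset.zip((feature_ds, label_ds)).batch(32)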

Below is an example of extracting packet_timestamp and packet_data. The labelling can be adapted along the lines of if packet_data is valid then labels = valid else malicious (a sketch of one such rule follows the code block below).

%tensorflow_version 2.x
import tensorflow as tf
import tensorflow_io as tfio 
import numpy as np

# Create an IODataset from a pcap file  
first_file = tfio.IODataset.from_pcap('/content/fuzz-2006-06-26-2594.pcap')
second_file = tfio.IODataset.from_pcap('/content/fuzz-2006-08-27-19853.pcap')

# Concatenate the Read Files
feature = first_file.concatenate(second_file)
# List for pcap 
packet_timestamp_list = []
packet_data_list = []

# some dummy labels
labels = []

packets_total = 0
for v in feature:
    (packet_timestamp, packet_data) = v
    packet_timestamp_list.append(packet_timestamp.numpy())
    packet_data_list.append(packet_data.numpy())
    labels.append(0)
    if packets_total == 0:
        assert np.isclose(
            packet_timestamp.numpy(), 1084443427.311224, rtol=1e-15
        )  # we know this is the correct value in the test pcap file
        assert (
            len(packet_data.numpy()) == 62
        )  # we know this is the correct packet data buffer length in the test pcap file
    packets_total += 1
assert (
    packets_total == 43
)  # we know this is the correct number of packets in the test pcap file
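
As a concrete version of the rule above, the label could be derived from the raw packet bytes right after the extraction loop. This is purely illustrative; the 60-byte minimum-frame threshold is an assumption and not part of the original answer.

# hypothetical labelling rule: 0 = valid, 1 = malicious
MIN_FRAME_LEN = 60  # assumed threshold, replace with your own validity check
labels = [0 if len(data) >= MIN_FRAME_LEN else 1 for data in packet_data_list]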

Below is an example of using the dataset in a model. This model will not work as-is, because I have not preprocessed the string-typed packet_data. Preprocess it according to your requirements and then use it in the model.

%tensorflow_version 2.x
import tensorflow as tf
import tensorflow_io as tfio 
import numpy as np

# Create an IODataset from a pcap file  
first_file = tfio.IODataset.from_pcap('/content/fuzz-2006-06-26-2594.pcap')
second_file = tfio.IODataset.from_pcap('/content/fuzz-2006-08-27-19853.pcap')

# Concatenate the Read Files
feature = first_file.concatenate(second_file)

# List for pcap 
packet_timestamp = []
packet_data = []

# some dummy labels
labels = []

# add 0 as label. You can use your actual labels here
for v in feature:
  (timestamp, data) = v
  packet_timestamp.append(timestamp.numpy())
  packet_data.append(data.numpy())
  labels.append(0)

## Do the preprocessing of packet_data here

# Add labels to the training data
# Preprocess the packet_data to convert string to meaningful value and use here
train_ds = tf.data.Dataset.from_tensor_slices(((packet_timestamp,packet_data), labels))
# Set the batch size
train_ds = train_ds.shuffle(5000).batch(32)

##### PROGRAM WILL RUN SUCCESSFULLY TILL HERE. TO USE IN THE MODEL DO THE PREPROCESSING OF PACKET DATA AS EXPLAINED ### 

# Have defined some simple model
model = tf.keras.Sequential([
  tf.keras.layers.Flatten(),
  tf.keras.layers.Dense(100),
  tf.keras.layers.Dense(10)
])

model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), 
              metrics=['accuracy'])

model.fit(train_ds, epochs=2)
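
One possible preprocessing sketch is shown below: it turns each packet's raw bytes into a fixed-length numeric vector before building train_ds. The 1500-byte maximum length and the byte/255 scaling are assumptions for illustration, not part of the original answer.

import numpy as np

MAX_LEN = 1500  # assumed maximum packet length in bytes

def bytes_to_vector(raw_bytes, max_len=MAX_LEN):
    # decode raw packet bytes to uint8, scale to [0, 1], then pad/truncate to a fixed length
    arr = np.frombuffer(raw_bytes, dtype=np.uint8).astype(np.float32) / 255.0
    padded = np.zeros(max_len, dtype=np.float32)
    padded[:min(len(arr), max_len)] = arr[:max_len]
    return padded

packet_data_numeric = np.stack([bytes_to_vector(d) for d in packet_data])

# build the training dataset from the numeric packet data instead of the raw strings
train_ds = tf.data.Dataset.from_tensor_slices(
    ((packet_timestamp, packet_data_numeric), labels)).shuffle(5000).batch(32)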

Hope this answers your question. Happy learning.
