Commit 2fbb0d16 by Cristina

new text file, multiple modifications

parent 494730fa
Showing with 604 additions and 66 deletions
This diff could not be displayed because it is too large.
......@@ -797,3 +797,60 @@ Nearest to make: believe,, give, render, pupil, be,, mean, evade, spread, write,
Nearest to things: cease, find, current, on, conclusions, writers, encumbering, bright’s, solution, disposed,
Nearest to do: london, nose, parasite, 10, indifference, causes, ample, ii, blindly, Nor,
Nearest to They: discouraging, dogs, clock, deny, diagrams, party, sociable, ferule, southern, cultivation,
Nearest to may: refer, naive, tall, carefully, indestructible, inheritance, fast, west, universal, end,
Nearest to present: schoolroom, overwhelming, attic, precise, Into, lacks, tongue, traces, pages, desiring,
Nearest to which: criticise, educating, proposal, fond, forthwith, gave, parentage, november, alike, hour,
Nearest to not: bias, complication, trend, admiration, offer, bogus, refined, particularly, eyesight, beautifully,
Nearest to many: illfeeding, cared, individuals, then, comments, morbid, king, dawn, pretentiousness, shopmen,
Nearest to english: lounge, assisting, discolouration, own, inconsequence, assent, positions, opposition, operative, leg,
Nearest to and: intensely, silly, cringing, lamentable, weaker, handicapped, glaringly, innocence, could, grey,
Nearest to these: dreary, objective, rude, needs, tempts, granted, linguists, evidently, akin, sends,
Nearest to great: immaterial, government, gravel, labours, presence, appendix, palliatives, commonalty, hand, dislike,
Nearest to life: hearing, earthly, represents, slavey, evolved, apropos, justifies, worm, hospital, obstacles,
Nearest to most: clergyman, sussex, contrived, intermarriage, grounds, unit, you, groups, bathroom, stop,
Nearest to should: splendours, mood, food, aggressive, version, marbled, net, murderess, 8, uncompromising,
Nearest to much: unworked, giving, press, embitterment, honest, achieved, single, assistance, boarders, occupy,
Nearest to But: received, find, thoughtful, although, private, sciences, size, exact, potent, indebtedness,
Nearest to and: indifferent, sown, laws, 1898, development, solitary, shirk, plentiful, possibility, availably,
Nearest to way: republicans, wage, talked, observation, elizabethan, code, famine, athwart, pamphlet, measured,
Nearest to may: mischief, prevail, daughters, Surely, abject, amazingly, achieved, delay, vegetative, ceremonial,
Nearest to one: senility, explosive, impracticability, courageous, judgment, size, Prig, secluded, clotted, sufficiently,
Nearest to his: pride, battered, fastidious, end, british, coronating, agreed, looking, subordinating, overwhelming,
Nearest to which: south, practised, raiding, resolved, Upon, selfsame, essentially, at, sexually, income,
Nearest to such: idiom, shifting, vote, apt, arising, understands, episcopate, together, deny, protect,
Nearest to some: pretentiousness, astonished, varieties, unfolding, slavey, easier, cars, developing, foundation, despises,
Nearest to for: mediocre, assimilate, office, union, vigorous, exists, misled, forth, overcrowding, thirty,
Nearest to who: districts, aside, start, consistently, chance, wider, plants, greatness, types, produce,
Nearest to but: illfeeding, privacies, identical, c, intimate, handling, austere, glaringly, cause, appear,
Nearest to their: drilling, heavy, sham, step, Surround, contributing, penalties, usually, fitful, pleasures,
Nearest to present: requirements, living, countless, relative, obdurate, view, said, shameless, writers, dwarfed,
Nearest to new: future, happened, unhonoured, cunningham, births, sordidness, prematurely, thoroughly, unmake, anything,
Nearest to by: intentions, extremely, help, argument, affect, foolish, taken, improvement, sinuous, silences,
Nearest to go: groups, condition, belief, 4656676, identity, 05070423, transmission, answer, cranrprojectorgdocmanualsrdatahtml, reimbursement,
Nearest to statisticians: continues, across, pronounced, behavior, supplied, date, location, 5798488, imposed, pitfalls,
Nearest to twoway: 146, copy, replace, 22e16, identify, solution, can, least, an, somewhat,
Nearest to array: southern, soup, representing, structure, underneath, million, 05862069, moore, myfamilygenders, prime,
Nearest to 5item: execution, application, expression, chunk, dbwritetable, Max, basic, vermont, respectable, 156,
Nearest to go: apitwittercomoauthauthorize, place, jshape, nil, experiment, And, 14, contact, 170, Median,
Nearest to statisticians: separators, lamp, saving, prepared, posting, wwwstatmethodsnet, p, demonstrated, 1940s, generates,
Nearest to twoway: command, weight, engineering, majority, significance, col2, assign, plus, covered, tinydata9999999,
Nearest to array: quantile, reasoning, regular, subtly, conservative, control, why, diagnostics, decimal, platforms,
Nearest to 5item: baby, born, download, context, Address, mapreduce, sparsity, goal, enwikipediaorgwikir, learning,
Nearest to go: Based, describe, compact, bigness, dawn, 4656676, hire, insight, started, privacy,
Nearest to statisticians: desktop, dispersion, thunderstorm, Shapefile, happened, over, rectangle, ingredients, applicationvndmsexcel, geocodes,
Nearest to twoway: sufficiently, association, rsquared, Rules, quality, Data, cranrprojectorgdocmanualsrdatahtml, feedback, domain, going,
Nearest to array: illuminate, label, annually, 2073, forth, All, 400000, extremely, processed, coincidence,
Nearest to 5item: skewness, expressed, contractspecialist, extraneous, t, discernible, hint, design, platforms, popular,
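The "Nearest to" lists above are produced by taking the cosine similarity between the normalized embedding of each validation word and every other embedding, then printing the closest neighbours. A minimal numpy sketch of that step, not the exact TensorFlow graph; the names embeddings, reverse_dictionary and valid_examples follow the tutorial the scripts below adapt:

import numpy as np

def nearest_words(embeddings, reverse_dictionary, valid_examples, top_k=10):
    # Normalize every embedding to unit length so a dot product is cosine similarity.
    norm = np.sqrt(np.sum(np.square(embeddings), axis=1, keepdims=True))
    normalized = embeddings / norm
    # Similarity of each validation word against the whole vocabulary.
    similarity = np.dot(normalized[valid_examples], normalized.T)
    for i, word_index in enumerate(valid_examples):
        # Sort descending and skip position 0, which is the word itself.
        nearest = (-similarity[i]).argsort()[1:top_k + 1]
        neighbours = ', '.join(reverse_dictionary[n] for n in nearest)
        print('Nearest to %s: %s,' % (reverse_dictionary[word_index], neighbours))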
This diff could not be displayed because it is too large.
......@@ -6,7 +6,7 @@ import re
import string
import codecs
filename = 'input/mankind-in-the-making.txt'
filename = 'an_introduction_to_data_science_j_stanton.txt'
sentences = []
regex = re.compile('[%s]' % re.escape(string.punctuation)) #see documentation here: http://docs.python.org/2/library/string.html
......@@ -30,6 +30,6 @@ with open(filename, 'r') as source:
string = ' '.join(string)
sentences.append(string)
with codecs.open("input/mankind-in-the-making_stripped.txt", "w", "utf-8") as destination:
with codecs.open("an_introduction_to_data_science_j_stanton.txt", "w", "utf-8") as destination:
for sentence in sentences:
destination.write(sentence.strip().capitalize()+" ")
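The hunk above only swaps the input and output filenames of the text-preparation script; the rest of the script strips punctuation and collapses whitespace before the text is fed to word2vec. A standalone sketch of that preprocessing idea, with a hypothetical strip_text helper, since the diff shows only fragments of the original loop:

import re
import string
import codecs

# Same idea as the script above: drop punctuation, collapse whitespace,
# and write the cleaned, capitalized sentences to the destination file.
regex = re.compile('[%s]' % re.escape(string.punctuation))

def strip_text(source_path, destination_path):
    sentences = []
    with codecs.open(source_path, 'r', 'utf-8') as source:
        for line in source:
            cleaned = regex.sub('', line)   # remove punctuation characters
            words = cleaned.split()         # split on any whitespace
            if words:
                sentences.append(' '.join(words))
    with codecs.open(destination_path, 'w', 'utf-8') as destination:
        for sentence in sentences:
            destination.write(sentence.strip().capitalize() + ' ')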
The file could not be displayed because it is too large.
[[1240]
[ 233]
[ 969]
[ 439]
[ 233]
[ 128]
[ 439]
[ 34]
[ 128]
[1151]
[ 34]
[ 4]
[1151]
[2519]
[2082]
[ 4]
[2530]
[2519]
[2082]
[1479]
[2530]
[ 16]
[1479]
[ 3]
[ 16]
[2475]
[ 35]
[ 3]
[1940]
[2475]
[ 35]
[ 444]
[1940]
[3279]
[ 444]
[1855]
[3279]
[2976]
[2284]
[1855]
[2283]
[2976]
[ 35]
[2284]
[2283]
[2519]
[2082]
[ 35]
[2519]
[3474]
[2976]
[2082]
[2283]
[3474]
[2976]
[ 35]
[1940]
[2283]
[3015]
[ 35]
[1855]
[1940]
[3015]
[ 11]
[1855]
[ 249]
[ 7]
[ 11]
[ 249]
[1097]
[ 7]
[ 379]
[1097]
[ 1]
[ 379]
[3827]
[ 1]
[ 0]
[3827]
[ 0]
[ 0]
[ 620]
[ 0]
[2140]
[ 125]
[ 620]
[2140]
[ 15]
[ 125]
[ 777]
[ 15]
[ 4]
[ 777]
[ 594]
[2795]
[ 4]
[ 6]
[ 594]
[2795]
[2321]
[ 6]
[ 11]
[2321]
[ 109]
[ 125]
[ 11]
[ 15]
[ 109]
[ 125]
[ 777]
[ 4]
[ 15]
[ 499]
[ 777]
[ 4]
[ 22]
[3740]
[ 499]
[ 22]
[ 1]
[ 109]
[3740]
[ 1]
[ 125]
[ 380]
[ 109]
[1450]
[ 125]]
\ No newline at end of file
[ 969 969 233 233 439 439 128 128 34 34 1151 1151 4 4 2519
2519 2082 2082 2530 2530 1479 1479 16 16 3 3 2475 2475 35 35
1940 1940 444 444 3279 3279 1855 1855 2976 2976 2284 2284 2283 2283 35
35 2519 2519 2082 2082 3474 3474 2976 2976 2283 2283 35 35 1940 1940
3015 3015 1855 1855 11 11 249 249 7 7 1097 1097 379 379 1
1 3827 3827 0 0 0 0 620 620 2140 2140 125 125 15 15
777 777 4 4 594 594 2795 2795 6 6 2321 2321 11 11 109
109 125 125 15 15 777 777 4 4 499 499 22 22 3740 3740
1 1 109 109 125 125 380 380]
\ No newline at end of file
Tensor("Placeholder_1:0", shape=(128, 1), dtype=int32)
\ No newline at end of file
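The 128x1 column above matches the exported example training-batch labels and the flattened array matches the example training batch from generate_batch(batch_size=128, num_skips=2, skip_window=1): each centre-word index appears num_skips times in the batch, paired with one of its context words in the labels. A simplified sketch of that pairing (the actual script uses a deque buffer over data; this version only illustrates the windowing):

import random
import numpy as np

def skip_gram_pairs(data, batch_size=128, num_skips=2, skip_window=1):
    # Slide a window of span 2*skip_window+1 over the list of word indices
    # and emit (centre, context) pairs, num_skips pairs per centre word.
    span = 2 * skip_window + 1
    batch = np.ndarray(shape=(batch_size,), dtype=np.int32)
    labels = np.ndarray(shape=(batch_size, 1), dtype=np.int32)
    i, pos = 0, 0
    while i < batch_size:
        window = data[pos:pos + span]
        centre = window[skip_window]
        contexts = window[:skip_window] + window[skip_window + 1:]
        for context in random.sample(contexts, num_skips):
            batch[i] = centre
            labels[i, 0] = context
            i += 1
            if i == batch_size:
                break
        pos = (pos + 1) % (len(data) - span)
    return batch, labels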
......@@ -33,31 +33,31 @@ import tensorflow as tf
# ***********************************************************************************
# Step 1: Download data.
# ***********************************************************************************
url = 'http://mattmahoney.net/dc/'
def maybe_download(filename, expected_bytes):
"""
Step 1: Download data.
"""
if not os.path.exists(filename):
filename, _ = urllib.request.urlretrieve(url + filename, filename)
statinfo = os.stat(filename)
if statinfo.st_size == expected_bytes:
print('Found and verified', filename)
else:
print(statinfo.st_size)
raise Exception('Failed to verify ' + filename + '. Can you get to it with a browser?')
return filename
# url = 'http://mattmahoney.net/dc/'
# def maybe_download(filename, expected_bytes):
# """
# Step 1: Download data.
# """
# if not os.path.exists(filename):
# filename, _ = urllib.request.urlretrieve(url + filename, filename)
# statinfo = os.stat(filename)
# if statinfo.st_size == expected_bytes:
# print('Found and verified', filename)
# else:
# print(statinfo.st_size)
# raise Exception('Failed to verify ' + filename + '. Can you get to it with a browser?')
# return filename
# filename = maybe_download('text8.zip', 31344016)
def read_data(filename):
"""
Read the data into a list of strings. Extract the first file enclosed in a zip file as a list of words
"""
with zipfile.ZipFile(filename) as f:
data = tf.compat.as_str(f.read(f.namelist()[0])).split()
return data
# def read_data(filename):
# """
# Read the data into a list of strings. Extract the first file enclosed in a zip file as a list of words
# """
# with zipfile.ZipFile(filename) as f:
# data = tf.compat.as_str(f.read(f.namelist()[0])).split()
# return data
# read_data(filename)
......@@ -65,7 +65,7 @@ def read_data(filename):
# ALGOLIT step 1: read data from plain text file
# ***********************************************************************************
filename = 'input/mankind-in-the-making_stripped.txt'
filename = 'input/an_introduction_to_data_science_j_stanton.txt'
words = []
def read_input_text(filename):
......@@ -86,7 +86,7 @@ read_input_text(filename)
# Step 2: Create a dictionary and replace rare words with UNK token.
# ***********************************************************************************
vocabulary_size = 5000
vocabulary_size = 5000
def build_dataset(words):
"""
......@@ -101,7 +101,7 @@ def build_dataset(words):
# >>> printing the word counts; the output is a Counter({'word': 1234}) object, where 1234 is the number of times the word appears
# print('collections.Counter(words)', collections.Counter(words))
# >>> printing a selection of a chunk of the words, in the size of the vocabulary_size
# print('collections.Counter(words).most_common(vocabulary_size - 1) >>>', collections.Counter(words).most_common(vocabulary_size - 1))
......@@ -110,13 +110,13 @@ def build_dataset(words):
# >>> print the extended count list
# print('count >>> ',count)
# create dictionary of most common words + index number
# frequency value is ignored, the number sets the order
dictionary = dict()
for word, _ in count:
dictionary[word] = len(dictionary)
# print('dictionary', dictionary)
data = list()
......@@ -124,7 +124,7 @@ def build_dataset(words):
# counting how many words from the input file are disregarded
unk_count = 0
# if word of text is in (vocabulary_size) most common words,
# if word of text is in (vocabulary_size) most common words,
# it is translated into the index number of that word
for word in words:
if word in dictionary:
......@@ -135,7 +135,7 @@ def build_dataset(words):
# printing the excluded words
# print(word)
data.append(index)
# >>> print list of all words, connected to their index number. Unk words are connected to index number 0
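The comments above describe the core of build_dataset: count word frequencies, keep the vocabulary_size most common words, map each to an index number, and send every other word to UNK at index 0. A compact sketch of that logic, close to the TensorFlow tutorial version the script adapts (names are illustrative):

import collections

def build_dataset_sketch(words, vocabulary_size=5000):
    # Keep the (vocabulary_size - 1) most common words; index 0 is the UNK
    # token that absorbs every disregarded word.
    count = [['UNK', -1]]
    count.extend(collections.Counter(words).most_common(vocabulary_size - 1))
    dictionary = {word: index for index, (word, _) in enumerate(count)}
    data, unk_count = [], 0
    for word in words:
        index = dictionary.get(word, 0)   # 0 means the word was disregarded
        if index == 0:
            unk_count += 1
        data.append(index)
    count[0][1] = unk_count
    reverse_dictionary = dict(zip(dictionary.values(), dictionary.keys()))
    return data, count, dictionary, reverse_dictionary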
......@@ -203,7 +203,7 @@ def generate_batch(batch_size, num_skips, skip_window):
# test to see if variables are set well, 'ensure data integrity'
# % is the modulo operator: it performs an integer division and returns the remainder
# for example: 7 modulo 3 = 1 / 7 % 3 = 1
# for example: 7 modulo 3 = 1 / 7 % 3 = 1
assert batch_size % num_skips == 0
assert num_skips <= 2 * skip_window
......@@ -228,7 +228,7 @@ def generate_batch(batch_size, num_skips, skip_window):
# check if data_index + 1 is the same as length, if so, data_index is set to 0
# 2 % 4 = 2 because >>> 0 * 4 + 2 = 2
data_index = (data_index + 1) % len(data)
# after the last iteration, data_index = 3
# now buffer = [indexnumber1, indexnumber2, indexnumber3]
......@@ -243,13 +243,13 @@ def generate_batch(batch_size, num_skips, skip_window):
# first you check if target is still in the targets_to_avoid list
# it loops until it is no longer the same value as target
# example: target = 1
# example: target = 1
while target in targets_to_avoid:
# first iteration is always true
# target is reset to a random value (why random?) (and why do this?)
target = random.randint(0, span - 1)
target = random.randint(0, span - 1)
# only break the loop if the number is not in the list yet
# add the number to the list, if it is not in the targets_to_avoid yet!
targets_to_avoid.append(target)
......@@ -266,7 +266,7 @@ def generate_batch(batch_size, num_skips, skip_window):
# and why is +1 done on data_index again?
data_index = (data_index + 1) % len(data)
return batch, labels
# creation of batch with word + left/right window word
......@@ -290,7 +290,7 @@ batch, labels = generate_batch(batch_size=8, num_skips=2, skip_window=1)
Step 4: Build and train a skip-gram model.
"""
batch_size = 128
embedding_size = 300 # Dimension of the embedding vector.
embedding_size = 20 # Dimension of the embedding vector.
skip_window = 1 # How many words to consider left and right.
num_skips = 2 # How many times to reuse an input to generate a label.
......@@ -318,7 +318,7 @@ num_sampled = valid_window - embedding_size - valid_size # Number of negativ
# the sections in quotes come from the Tensorflow tutorial:
# https://www.tensorflow.org/versions/r0.11/tutorials/word2vec/
# suggestion from hans: work with Tensorboard, to generate graphs of the process of the nodes
# suggestion from hans: work with Tensorboard, to generate graphs of the process of the nodes
# (tf.graph can make outputs within the process)
with graph.as_default():
......@@ -349,7 +349,7 @@ with graph.as_default():
# **************
# *** NODE 2 ***
# **************
# what does this do?
# what does this do?
# it follows the batch_size (currently 128)
# embed = 128 x 20
embed = tf.nn.embedding_lookup(embeddings, train_inputs)
......@@ -431,7 +431,7 @@ with graph.as_default():
# ***********************************************************************************
# Step 5: Begin training.
# ***********************************************************************************
np.set_printoptions(threshold=np.inf) # allows printing full arrays / prevents truncated representations of arrays
np.set_printoptions(threshold=np.inf) # allows printing full arrays / prevents truncated representations of arrays
"""
Step 5: Begin training.
......@@ -461,7 +461,7 @@ with tf.Session(graph=graph) as session:
# The average loss is an estimate of the loss over the last 2000 batches.
print("Average loss at step ", step, ": ", average_loss)
average_loss = 0
# Note that this is expensive (~20% slowdown if computed every 500 steps)
if step % 10000 == 0:
print ("\n")
......@@ -530,4 +530,4 @@ except ImportError:
# print generate_batch.__doc__
# print prepare_model.__doc__
# print start_training.__doc__
# print plot_with_labels.__doc__
\ No newline at end of file
# print plot_with_labels.__doc__
......@@ -30,10 +30,10 @@ from six.moves import urllib
from six.moves import xrange # pylint: disable=redefined-builtin
import tensorflow as tf
# Algolit settings:
# Algolit settings:
# select training text from the input folder:
trainingtext = 'mankind-in-the-making_stripped.txt'
trainingtext = 'an_introduction_to_data_science_j_stanton.txt'
# Algolit adaptation:
print('TensorFlow version:', tf.__version__)
......@@ -55,7 +55,7 @@ def export(filename, data):
# def maybe_download(filename, expected_bytes):
# """
# Step 1: Download data.
# Step 1: Download data.
# """
# if not os.path.exists(filename):
# filename, _ = urllib.request.urlretrieve(url + filename, filename)
......@@ -125,7 +125,7 @@ def build_dataset(words):
# >>> printing the number of unique words
# print(len(collections.Counter(words)))
# result: 7259
# Algolit adaptation
export('step-2-collections.Counter(words)_'+str(len(collections.Counter(words)))+'-unique-words.txt', collections.Counter(words))
......@@ -137,19 +137,19 @@ def build_dataset(words):
# >>> print the extended count list
# print('count >>> ',count)
# create dictionary of most common words + index number
# frequency value is ignored, the number sets the order
dictionary = dict()
for word, _ in count:
dictionary[word] = len(dictionary)
# print('dictionary', dictionary)
# counting how many words from the input file are disregarded
unk_count = 0
# if word of text is in (vocabulary_size) most common words,
# if word of text is in (vocabulary_size) most common words,
# it is translated into the index number of that word
data = []
disregarded = []
......@@ -222,7 +222,7 @@ def generate_batch(batch_size, num_skips, skip_window):
# test to see if variables are set well, 'ensure data integrity'
# % is the modulo operator: it performs an integer division and returns the remainder
# for example: 7 modulo 3 = 1 / 7 % 3 = 1
# for example: 7 modulo 3 = 1 / 7 % 3 = 1
assert batch_size % num_skips == 0
assert num_skips <= 2 * skip_window
......@@ -252,7 +252,7 @@ def generate_batch(batch_size, num_skips, skip_window):
# check if data_index + 1 is the same as length, if so, data_index is set to 0
# 2 % 4 = 2 because >>> 0 * 4 + 2 = 2
data_index = (data_index + 1) % len(data)
# after the last iteration, data_index = 3
# now buffer = [indexnumber1, indexnumber2, indexnumber3]
......@@ -267,13 +267,13 @@ def generate_batch(batch_size, num_skips, skip_window):
# first you check if target is still in the targets_to_avoid list
# it loops until it is no longer the same value as target
# example: target = 1
# example: target = 1
while target in targets_to_avoid:
# first iteration is always true
# target is reset to a random value (why random?) (and why do this?)
target = random.randint(0, span - 1)
target = random.randint(0, span - 1)
# only break the loop if the number is not in the list yet
# add the number to the list, if it is not in the targets_to_avoid yet!
targets_to_avoid.append(target)
......@@ -290,7 +290,7 @@ def generate_batch(batch_size, num_skips, skip_window):
# and why is +1 done on data_index again?
data_index = (data_index + 1) % len(data)
return batch, labels
# creation of batch with word + left/right window word
......@@ -326,7 +326,7 @@ export('step-3-example-training-batch_batch-size-128_num-skips-2_skip-window-1.t
Step 4: Build and train a skip-gram model.
"""
batch_size = 128
embedding_size = 20 # Dimension of the embedding vector.
embedding_size = 20 # Dimension of the embedding vector.
skip_window = 1 # How many words to consider left and right.
num_skips = 2 # How many times to reuse an input to generate a label.
......@@ -388,7 +388,7 @@ with graph.as_default():
# **************
# *** NODE 2 ***
# **************
# what does this do?
# what does this do?
# it follows the batch_size (currently 128)
# embed = 128 x 20
embed = tf.nn.embedding_lookup(embeddings, train_inputs)
......@@ -426,7 +426,7 @@ with graph.as_default():
# time we evaluate the loss.
# reduce_mean > creates an average of all the inputs
loss = tf.reduce_mean(
tf.nn.nce_loss(nce_weights, nce_biases, embed, train_labels,
tf.nn.nce_loss(nce_weights, nce_biases, embed, train_labels,
num_sampled, vocabulary_size))
print('node 5 (loss)', loss)
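tf.nn.nce_loss treats every (input, label) pair as a small binary classification task: the true context word should score high against the input embedding, while num_sampled randomly drawn noise words should score low. A hedged per-example numpy sketch of that idea, ignoring the noise-distribution correction the TensorFlow op applies (names are illustrative):

import numpy as np

def nce_style_loss(embed, nce_weights, nce_biases, label, noise_ids):
    # embed: embedding vector of one input word, shape (embedding_size,)
    # label: index of the true context word; noise_ids: sampled negative words
    def logit(word_id):
        return np.dot(nce_weights[word_id], embed) + nce_biases[word_id]
    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))
    # Reward a high score for the true pair and low scores for the noise pairs.
    loss = -np.log(sigmoid(logit(label)))
    loss -= sum(np.log(sigmoid(-logit(n))) for n in noise_ids)
    return loss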
......@@ -437,7 +437,7 @@ with graph.as_default():
# **************
# Construct the SGD optimizer using a learning rate of 1.0.
optimizer = tf.train.GradientDescentOptimizer(1.0).minimize(loss)
print('node 6 (optimizer):', optimizer)
# >>> print optimizer
......@@ -466,7 +466,7 @@ with graph.as_default():
print('node 9 (valid_embeddings):', valid_embeddings)
# >>> print valid_embeddings
# **************
# *** NODE10 ***
# **************
......@@ -475,7 +475,7 @@ with graph.as_default():
print('node 10 (similarity):', similarity)
# >>> print similarity
# **************
# *** NODE11 ***
# **************
......@@ -484,11 +484,11 @@ with graph.as_default():
print('node 11 (init):', init)
# >>> print init
# ***********************************************************************************
# Step 5: Begin training.
# ***********************************************************************************
np.set_printoptions(threshold=np.inf) # allows printing full arrays / prevents truncated representations of arrays
np.set_printoptions(threshold=np.inf) # allows printing full arrays / prevents truncated representations of arrays
"""
Step 5: Begin training.
"""
......@@ -519,7 +519,7 @@ with tf.Session(graph=graph) as session:
# The average loss is an estimate of the loss over the last 2000 batches.
print("Average loss at step ", step, ": ", average_loss)
average_loss = 0
# Note that this is expensive (~20% slowdown if computed every 500 steps)
if step % 10000 == 0:
print ("\n")
......
This diff could not be displayed because it is too large.
TensorFlow version: 0.12.1
Data size: 77377
*exported step-1-algolit_data-size-77377-words.txt*
*exported step-2-collections.Counter(words)_6646-unique-words.txt*
*exported step-2-disregarded-words_1647.txt*
*exported step-2-count.txt*
*exported step-2-dictionary.txt*
*exported step-2-reverse_dictionary.txt*
*exported step-2-data_len(data)-is-77377-words.txt*
*exported step-2-reversed-training-text_77377-words.txt*
*exported step-3-example-training-batch.txt*
*exported step-3-example-training-batch-labels.txt*
*exported step-3-example-training-batch_batch-size-128_num-skips-2_skip-window-1.txt*
*exported step-4-initiated-array_train-inputs.txt*
*exported step-4-initiated-array_train_labels.txt*
*exported step-4-initiated-array_valid_dataset.txt*
node 1 (embeddings): Tensor("Variable/read:0", shape=(5000, 20), dtype=float32)
node 2 (embed): Tensor("embedding_lookup:0", shape=(128, 20), dtype=float32)
node 3 (nce_weights): Tensor("Variable_1/read:0", shape=(5000, 20), dtype=float32)
node 4 (nce_biases): Tensor("Variable_2/read:0", shape=(5000,), dtype=float32)
node 5 (loss) Tensor("Mean:0", shape=(), dtype=float32)
node 6 (optimizer): name: "GradientDescent"
op: "NoOp"
input: "^GradientDescent/update_Variable/ScatterSub"
input: "^GradientDescent/update_Variable_1/ScatterSub"
input: "^GradientDescent/update_Variable_2/ScatterSub"
node 7 (norm): Tensor("Sqrt:0", shape=(5000, 1), dtype=float32)
node 8 (normalized_embeddings) Tensor("truediv:0", shape=(5000, 20), dtype=float32)
node 9 (valid_embeddings): Tensor("embedding_lookup_1:0", shape=(5, 20), dtype=float32)
node 10 (similarity): Tensor("MatMul_1:0", shape=(5, 5000), dtype=float32)
node 11 (init): name: "init"
op: "NoOp"
input: "^Variable/Assign"
input: "^Variable_1/Assign"
input: "^Variable_2/Assign"
Step 5: Begin training.
Initialized
Average loss at step 0 : 207.835418701
Nearest to go: Based, describe, compact, bigness, dawn, 4656676, hire, insight, started, privacy,
Nearest to statisticians: desktop, dispersion, thunderstorm, Shapefile, happened, over, rectangle, ingredients, applicationvndmsexcel, geocodes,
Nearest to twoway: sufficiently, association, rsquared, Rules, quality, Data, cranrprojectorgdocmanualsrdatahtml, feedback, domain, going,
Nearest to array: illuminate, label, annually, 2073, forth, All, 400000, extremely, processed, coincidence,
Nearest to 5item: skewness, expressed, contractspecialist, extraneous, t, discernible, hint, design, platforms, popular,
*logfile.txt written*
Please install sklearn, matplotlib, and scipy to visualize embeddings.
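The last log line means the optional visualization step was skipped because sklearn, matplotlib, and scipy were not installed. In the TensorFlow tutorial these scripts follow, that step projects the learned embeddings to two dimensions with t-SNE and plots the most frequent words; a sketch assuming final_embeddings and reverse_dictionary from the script above:

# Needs scikit-learn and matplotlib installed.
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

def plot_embeddings(final_embeddings, reverse_dictionary, plot_only=500,
                    filename='tsne.png'):
    # Reduce the first plot_only embedding vectors to two dimensions.
    tsne = TSNE(perplexity=30, n_components=2, init='pca', n_iter=5000)
    low_dim = tsne.fit_transform(final_embeddings[:plot_only, :])
    plt.figure(figsize=(18, 18))
    for i in range(plot_only):
        x, y = low_dim[i, :]
        plt.scatter(x, y)
        plt.annotate(reverse_dictionary[i], xy=(x, y), xytext=(5, 2),
                     textcoords='offset points', ha='right', va='bottom')
    plt.savefig(filename)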