<meta name="generator" content="pandoc" />
<title></title>
<style type="text/css">code{white-space: pre;}</style>
<style type="text/css">
div.sourceCode { overflow-x: auto; }
table.sourceCode, tr.sourceCode, td.lineNumbers, td.sourceCode {
margin: 0; padding: 0; vertical-align: baseline; border: none; }
table.sourceCode { width: 100%; line-height: 100%; }
td.lineNumbers { text-align: right; padding-right: 4px; padding-left: 4px; color: #aaaaaa; border-right: 1px solid #aaaaaa; }
td.sourceCode { padding-left: 5px; }
code > span.kw { color: #007020; font-weight: bold; } /* Keyword */
code > span.dt { color: #902000; } /* DataType */
code > span.dv { color: #40a070; } /* DecVal */
code > span.bn { color: #40a070; } /* BaseN */
code > span.fl { color: #40a070; } /* Float */
code > span.ch { color: #4070a0; } /* Char */
code > span.st { color: #4070a0; } /* String */
code > span.co { color: #60a0b0; font-style: italic; } /* Comment */
code > span.ot { color: #007020; } /* Other */
code > span.al { color: #ff0000; font-weight: bold; } /* Alert */
code > span.fu { color: #06287e; } /* Function */
code > span.er { color: #ff0000; font-weight: bold; } /* Error */
code > span.wa { color: #60a0b0; font-weight: bold; font-style: italic; } /* Warning */
code > span.cn { color: #880000; } /* Constant */
code > span.sc { color: #4070a0; } /* SpecialChar */
code > span.vs { color: #4070a0; } /* VerbatimString */
code > span.ss { color: #bb6688; } /* SpecialString */
code > span.im { } /* Import */
code > span.va { color: #19177c; } /* Variable */
code > span.cf { color: #007020; font-weight: bold; } /* ControlFlow */
code > span.op { color: #666666; } /* Operator */
code > span.bu { } /* BuiltIn */
code > span.ex { } /* Extension */
code > span.pp { color: #bc7a00; } /* Preprocessor */
code > span.at { color: #7d9029; } /* Attribute */
code > span.do { color: #ba2121; font-style: italic; } /* Documentation */
code > span.an { color: #60a0b0; font-weight: bold; font-style: italic; } /* Annotation */
code > span.cv { color: #60a0b0; font-weight: bold; font-style: italic; } /* CommentVar */
code > span.in { color: #60a0b0; font-weight: bold; font-style: italic; } /* Information */
</style>
</head>
<body>
<hr />
<pre><code> __ ____
/ /_____ ____ / / /_ ____ _ __
/ __/ __ \/ __ \/ / __ \/ __ \| |/_/
/ /_/ /_/ / /_/ / / /_/ / /_/ /&gt; &lt;
\__/\____/\____/_/_.___/\____/_/|_| </code></pre>
<p>your personal image processing toolset!</p>
<h2 id="installing-building">Installing / Building</h2>
<p>This toolset is heterogeneous: some tools are written in C with bindings to libraries like opencv and tesseract, while others are written in Python and rely on packages like numpy, python-opencv, pillow, and scipy.</p>
<h3 id="python">Python</h3>
<p>For the Python tools, it may be useful to create a &quot;virtualenv&quot;; otherwise you may eventually run into version conflicts with other tools. Option A describes this approach. Alternatively, you can install the packages system-wide (option B).</p>
<h4 id="a.-installing-python-dependencies-in-a-venv">A. Installing Python dependencies in a venv</h4>
<ol style="list-style-type: decimal">
</ol>
<h4 id="b.-installing-python-dependencies-to-the-system">B. Installing Python dependencies to the system</h4>
<pre><code>sudo pip install scipy pillow numpy</code></pre>
<h3 id="c">C</h3>
<p>Some of the C-based tools, like <em>contours</em> and <em>channel</em>, have a simple makefile and can be built with:</p>
<div class="sourceCode"><pre class="sourceCode bash"><code class="sourceCode bash"><span class="bu">cd</span> contours
<span class="fu">make</span></code></pre></div>
<p>Other tools, like <em>texture</em> and <em>houghlines</em>, use cmake: if there is a file called &quot;CMakeLists.txt&quot;, the build process uses cmake. For example, building texture:</p>
<div class="sourceCode"><pre class="sourceCode bash"><code class="sourceCode bash"><span class="bu">cd</span> texture
<span class="fu">mkdir</span> build
<span class="bu">cd</span> build
<span class="fu">cmake</span> ..
<span class="fu">make</span></code></pre></div>
<h2 id="usage">Usage</h2>
<h3 id="image-gradient">Image Gradient</h3>
<pre><code>python python/imagegradient.py foo.png --output foo.gradient.png --svg foo.gradient.svg</code></pre>
<li>Viewer: Layer modes (such as masking ?!) ... and eventually export to other formats (SVG with image layers!)</li>
<li>Viewer: ?! Represent entire stacking as an SVG (to begin with!!!)</li>
</ul>
<h3 id="nov-24-2016">Nov 24 2016</h3>
<p>Incorporate ordering functions:</p>
<ul>
<li>A way to talk of &quot;items&quot; and collect metadata in a single JSON.</li>
<li>Scripts to generate (leaflet) maps based on metadata.</li>
<li>Cross-linkages with other visualisations (such as the map).</li>
</ul>
<p>What's more complex, though, is that each layer is now based on a resolution, which may (or may not) change the resulting order. So the orderings are now PER ZOOM LEVEL (which is interesting).</p>
<p>ALSO need to make a scraper for RMAH -- do it <em>promiscuously</em>?!</p>
</body>
</html>
```
__ ____
/ /_____ ____ / / /_ ____ _ __
/ __/ __ \/ __ \/ / __ \/ __ \| |/_/
/ /_/ /_/ / /_/ / / /_/ / /_/ /> <
\__/\____/\____/_/_.___/\____/_/|_|
```
your personal image processing toolset!
Installing / Building
---------------------
This toolset is heterogeneous: some tools are written in C with bindings to libraries like opencv and tesseract, while others are written in Python and rely on packages like numpy, python-opencv, pillow, and scipy.
### Python
For the Python tools, it may be useful to create a "virtualenv"; otherwise you may eventually run into version conflicts with other tools. Option A describes this approach. Alternatively, you can install the packages system-wide (option B).
#### B. Installing Python dependencies to the system
sudo pip install scipy pillow numpy
### C
Some of the C-based tools, like *contours* and *channel*, have a simple makefile and can be built with:
```bash
cd contours
make
```
Other tools, like *texture* and *houghlines*, use cmake: if there is a file called "CMakeLists.txt", the build process uses cmake. For example, building texture:
```bash
cd texture
mkdir build
cd build
cmake ..
make
```
Usage
-----------
Notes
-----------
* Viewer: Add SVG layer support (technically possible ?! via img tag)... yes! using iframe instead of img
* Viewer: Layer modes (such as masking ?!) ... and eventually export to other formats (SVG with image layers!)
* Viewer: ?! Represent entire stacking as an SVG (to begin with!!!)
### Nov 24 2016
Incorporate ordering functions:
* A way to talk of "items" and collect metadata in a single JSON.
* Scripts to generate (leaflet) maps based on metadata.
* Cross-linkages with other visualisations (such as the map)
What's more complex, though, is that each layer is now based on a resolution, which may (or may not) change the resulting order. So the orderings are now PER ZOOM LEVEL (which is interesting).
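A sketch of what per-zoom-level orderings might look like inside the single metadata JSON (the structure and names here are hypothetical, not a format the scripts define):

```python
import json

# Hypothetical metadata record: one entry per item, plus an ordering of
# item ids computed separately for each zoom level, since the resolution
# at each level may change the resulting order.
metadata = {
    "items": [
        {"id": "img001", "source": "originals/2016-11-09.png"},
        {"id": "img002", "source": "originals/2016-11-10.png"},
    ],
    "orderings": {
        # zoom level -> ordered list of item ids
        "0": ["img001", "img002"],
        "1": ["img002", "img001"],
    },
}

def ordering_for_zoom (metadata, z):
    # fall back to plain item order when a zoom level has no ordering
    default = [item["id"] for item in metadata["items"]]
    return metadata["orderings"].get(str(z), default)

print(json.dumps(ordering_for_zoom(metadata, 1)))
```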
ALSO need to make a scraper for RMAH -- do it *promiscuously*?!
from argparse import ArgumentParser
import sys, cv2, json
from numpy import zeros
p = ArgumentParser()
p.add_argument("src1")
p.add_argument("src2")
p.add_argument("--shift", type=int, default=0)
p.add_argument("--output", default="output.avi")
p.add_argument("--framerate", type=float, default=25, help="output frame rate")
p.add_argument("--frames", type=int, default=None, help="output only so many frames")
p.add_argument("--data1", default="faces.json", help="json face data for src1")
p.add_argument("--data2", default="faces.json", help="json face data for src2")
p.add_argument("--fourcc", default="XVID", help="MJPG,mp4v,XVID")
args = p.parse_args()
try:
fourcc = cv2.cv.CV_FOURCC(*args.fourcc)
except AttributeError:
fourcc = cv2.VideoWriter_fourcc(*args.fourcc)
from source import Source
src1 = Source(args.src1, args.data1)
src2 = Source(args.src2, args.data2)
src1.unparse()
src2.unparse()
out = cv2.VideoWriter()
out.open(args.output, fourcc, args.framerate, (src1.width, src1.height))
count = 0
newfaces = src2.faces(with_frame=True)
for i in range(args.shift):
frameno, frame, face = newfaces.next()
for frameno, frame, faces, eyes in src1.frames(only_with_features=True):
print "output frame {0}".format(count+1)
for face in faces:
x, y, w, h = face
_, newframe, newface = newfaces.next()
nx, ny, nw, nh = newface
# resize newface to fit oldface
newfaceimg = newframe[ny:ny+nh, nx:nx+nw]
newfaceimg = cv2.resize(newfaceimg, (w, h))
frame[y:y+h, x:x+w] = newfaceimg[0:h, 0:w]
out.write(frame)
count += 1
if args.frames and count >= args.frames:
break
out.release()
from argparse import ArgumentParser
import sys, cv2, json
from numpy import zeros
p = ArgumentParser()
p.add_argument("src")
p.add_argument("--output", default="output.avi")
p.add_argument("--framerate", type=float, default=25, help="output frame rate")
p.add_argument("--frames", type=int, default=None, help="output only so many frames")
p.add_argument("--data", default="faces.json", help="json face data for src1")
p.add_argument("--fourcc", default="XVID", help="MJPG,mp4v,XVID")
p.add_argument("--shift", type=int, default=1)
# p.add_argument("--xscale", default="624/720")
# p.add_argument("--yscale", default="480/540")
p.add_argument("--xscale", default="1.0")
p.add_argument("--yscale", default="1.0")
args = p.parse_args()
try:
fourcc = cv2.cv.CV_FOURCC(*args.fourcc)
except AttributeError:
fourcc = cv2.VideoWriter_fourcc(*args.fourcc)
from source import Source
src1 = Source(args.src, args.data)
src1.unparse()
out = cv2.VideoWriter()
out.open(args.output, fourcc, args.framerate, (src1.width, src1.height))
def parseScale (v):
if v:
if "/" in v:
n,d = v.split("/", 1)
return float(n) / float(d)
else:
return float(v)
else:
return 1.0
xscale = parseScale(args.xscale)
yscale = parseScale(args.yscale)
print xscale, yscale
count = 0
for frameno, frame, face in src1.faces(with_frame=True):
print "output frame {0}".format(count+1)
# out.write(frame)
# faceimg = frame[face[0]:face[1], face[0]+face[2]:face[1]+face[3]]
x, y, w, h = face
x = int(x*xscale)
y = int(y*yscale)
w = int(w*xscale)
h = int(h*yscale)
faceimg = frame[y:y+h, x:x+w]
# faceimg = frame[x:x+w, y:y+h]
# nf = cv2.resize(faceimg, (src1.width, src1.height))
# out.write(nf)
# newframe = zeros(frame.shape, "uint8")
# newframe[y:y+h, x:x+w] = faceimg
## BLACK OUT FACES
frame[y:y+h, x:x+w] = zeros((h,w,3), "uint8")
out.write(frame)
count += 1
if args.frames and count >= args.frames:
break
out.release()
# move face by face through src1, replacing with faces from
# cut out
from argparse import ArgumentParser
import sys, cv2, json
from numpy import zeros
p = ArgumentParser()
p.add_argument("src")
p.add_argument("--output", default="output.avi")
p.add_argument("--framerate", type=float, default=25, help="output frame rate")
p.add_argument("--frames", type=int, default=None, help="output only so many frames")
p.add_argument("--data", default="faces.json", help="json face data for src1")
p.add_argument("--fourcc", default="XVID", help="MJPG,mp4v,XVID")
p.add_argument("--shift", type=int, default=1)
args = p.parse_args()
try:
fourcc = cv2.cv.CV_FOURCC(*args.fourcc)
except AttributeError:
fourcc = cv2.VideoWriter_fourcc(*args.fourcc)
from source import Source
src = Source(args.src, args.data)
src.unparse()
out = cv2.VideoWriter()
out.open(args.output, fourcc, args.framerate, (src.width, src.height))
count = 0
for frameno, frame, faces, eyes in src.frames(only_with_features=True):
print "output frame {0}".format(count+1)
for x, y, w, h in faces:
frame[y:y+h, x:x+w] = zeros((h,w,3), "uint8")
out.write(frame)
count += 1
if args.frames and count >= args.frames:
break
out.release()
from argparse import ArgumentParser
import sys, cv2, json
p = ArgumentParser("")
p.add_argument("input")
p.add_argument("--output", default="output.avi")
p.add_argument("--fourcc", default="XVID", help="MJPG,mp4v,XVID")
args = p.parse_args()
fdata = []
for line in sys.stdin:
    line = line.strip()
    # stop at the "---" separator before trying to parse it as JSON
    if line.startswith("---"):
        break
    if line:
        fdata.append(json.loads(line))
finalframe = fdata[-1]["frameno"]
cap = cv2.VideoCapture(args.input)
# property and fourcc names moved between OpenCV 2 and 3
try:
    num_frames = int(cap.get(cv2.cv.CV_CAP_PROP_FRAME_COUNT))
    width = int(cap.get(cv2.cv.CV_CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT))
except AttributeError:
    num_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
print ((width, height), num_frames)
try:
    fourcc = cv2.cv.CV_FOURCC(*args.fourcc)
except AttributeError:
    fourcc = cv2.VideoWriter_fourcc(*args.fourcc)
out = cv2.VideoWriter()
out.open(args.output,fourcc, 25, (width, height))
frameno = 0
while(cap.isOpened()):
sys.stderr.write("\r{0}...".format(frameno))
sys.stderr.flush()
ret, frame = cap.read()
if fdata and fdata[0]["frameno"] == frameno:
print ("output frame", frameno)
data = fdata[0]
fdata = fdata[1:]
for x,y,w,h in data["faces"]:
cv2.rectangle(frame,(x,y),(x+w,y+h),(0,0,255),3)
for x,y,w,h in data["eyes"]:
cv2.rectangle(frame,(x,y),(x+w,y+h),(0,255,0),3)
out.write(frame)
frameno += 1
if frameno >= finalframe:
break
out.release()
cap.release()
#!/usr/bin/env python
from __future__ import print_function
import numpy as np
import cv2, os, json, sys
from argparse import ArgumentParser
p = ArgumentParser("")
p.add_argument("input")
p.add_argument("--cascades", default="./haarcascades", help="path of cascades")
p.add_argument("--frames", type=int, default=0, help="number of frames to process, default: 0 (all)")
p.add_argument("--skip", type=int, default=None, help="skip frames")
# p.add_argument("--camera", type=int, default=0, help="camera number, default: 0")
args = p.parse_args()
tpath = os.path.expanduser(args.cascades)
face_cascade = cv2.CascadeClassifier(os.path.join(tpath, 'haarcascade_frontalface_default.xml'))
eye_cascade = cv2.CascadeClassifier(os.path.join(tpath, 'haarcascade_eye.xml'))
def process_image (img):
ret = False
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# faces = face_cascade.detectMultiScale(gray, 1.3, 5)
faces = face_cascade.detectMultiScale(gray)
ar_faces=[]
ar_eyes=[]
for (x,y,w,h) in faces:
#print "face", (x, y, w, h)
ar_faces.append([int(x), int(y), int(w), int(h)])
#print type(x)
ret = True
cv2.rectangle(img,(x,y),(x+w,y+h),(255,0,0),2)
#print img
roi_gray = gray[y:y+h, x:x+w]
roi_color = img[y:y+h, x:x+w]
eyes = eye_cascade.detectMultiScale(roi_gray)
for (ex,ey,ew,eh) in eyes:
cv2.rectangle(roi_color,(ex,ey),(ex+ew,ey+eh),(0,255,0),2)
aex=int(ex)+int(x)
aey=int(ey)+int(y)
ar_eyes.append([aex,aey,int(ew),int(eh)]);
#print 'eye'
return ret, ar_faces, ar_eyes
cap = cv2.VideoCapture(args.input)
frameno = 0
# dt = datetime.datetime.now()
out = {}
ff = out["frames"] = []
while True:
    # Capture frame-by-frame
    sys.stderr.write("\r{0}...".format(frameno))
    sys.stderr.flush()
    ret, frame = cap.read()
    if not ret:
        # end of the video
        break
    if args.skip and frameno < args.skip:
        frameno += 1
        continue
    d, faces, eyes = process_image(frame)
if d:
print (frameno, faces, file=sys.stderr)
data = {"frameno": frameno, "faces": faces, "eyes": eyes}
ff.append(data)
print (json.dumps(data))
sys.stdout.flush()
# tstamp = dt.strftime("%Y%m%d_%H%M%S")
# outpath = "frame{0}_{1:04d}.jpg".format(tstamp, frameno)
# outpath = os.path.join(args.path, outpath)
# cv2.imwrite(outpath, frame)
# sys.stdout.write(outpath+"\n")
# sys.stdout.flush()
frameno += 1
if args.frames and frameno >= args.frames:
break
cap.release()
print ("\n---\n")
print (json.dumps(out))
from argparse import ArgumentParser
import sys, cv2, json
from numpy import zeros
p = ArgumentParser()
p.add_argument("src")
p.add_argument("--output", default="output.avi")
p.add_argument("--framerate", type=float, default=25, help="output frame rate")
p.add_argument("--frames", type=int, default=None, help="output only so many frames")
p.add_argument("--data", default="faces.json", help="json face data for src1")
p.add_argument("--fourcc", default="XVID", help="MJPG,mp4v,XVID")
args = p.parse_args()
try:
fourcc = cv2.cv.CV_FOURCC(*args.fourcc)
except AttributeError:
fourcc = cv2.VideoWriter_fourcc(*args.fourcc)
from source import Source
src = Source(args.src, args.data)
src.unparse()
out = cv2.VideoWriter()
out.open(args.output, fourcc, args.framerate, (src.width, src.height))
count = 0
for frameno, frame, faces, eyes in src.frames(only_with_features=True):
print "output frame {0}".format(count+1)
newframe = zeros(frame.shape, "uint8")
for face in faces:
x, y, w, h = face
newframe[y:y+h, x:x+w] = frame[y:y+h, x:x+w]
out.write(newframe)
count += 1
if args.frames and count >= args.frames:
break
out.release()
# move face by face through src1, replacing with faces from
# cut out
import json, cv2
try:
FRAME_COUNT = cv2.CAP_PROP_FRAME_COUNT
FRAME_WIDTH = cv2.CAP_PROP_FRAME_WIDTH
FRAME_HEIGHT = cv2.CAP_PROP_FRAME_HEIGHT
except AttributeError:
FRAME_COUNT = cv2.cv.CV_CAP_PROP_FRAME_COUNT
FRAME_WIDTH = cv2.cv.CV_CAP_PROP_FRAME_WIDTH
FRAME_HEIGHT = cv2.cv.CV_CAP_PROP_FRAME_HEIGHT
class Source ():
def __init__(self, src, datasrc,init=True):
self.src = src
self.datasrc = datasrc
if init:
self.data = self.readdata(datasrc)
self.numfaceframes = len(self.data)
self.read_size()
    def readdata (self, src):
        ret = []
        with open(src) as f:
            for line in f:
                line = line.strip()
                # stop at the "---" separator before parsing it as JSON
                if line.startswith("---"):
                    break
                if line and not line.startswith("#"):
                    data = json.loads(line)
                    ret.append(data)
        return ret
def read_size (self):
cap = cv2.VideoCapture(self.src)
        self.numframes = int(cap.get(FRAME_COUNT)) # this value seems to be too low
self.width = int(cap.get(FRAME_WIDTH))
self.height = int(cap.get(FRAME_HEIGHT))
cap.release()
def close (self):
self.cap.release()
def unparse (self):
print "Movie with {0} frames sized {1}x{2}, {3} frames ({4:.1f}%) have faces".format(
self.numframes, self.width, self.height, len(self.data), 100.0*(len(self.data)/float(self.numframes)))
def features (self, feature_name="faces", with_frame=False):
if with_frame:
cap = cv2.VideoCapture(self.src)
frameno = 0
dataindex = 0
while True:
if with_frame:
ret, frame = cap.read()
if dataindex < len(self.data):
# print dataindex, self.numfaceframes
data = self.data[dataindex]
if frameno == data["frameno"]:
for f in data["faces"]:
if with_frame:
yield (frameno, frame, f)
else:
yield (frameno, f)
dataindex += 1
else:
break
frameno += 1
if with_frame:
cap.release()
def faces (self, with_frame=False):
return self.features("faces", with_frame=with_frame)
def eyes (self, with_frame=False):
return self.features("eyes", with_frame=with_frame)
# def frames_with_features (self):
# cap = cv2.VideoCapture(self.src)
# frameno = 0
# dataindex = 0
# while True:
# ret, frame = cap.read()
# if dataindex < len(self.data):
# data = self.data[dataindex]
# if frameno == data["frameno"]:
# yield (frameno, frame, data["face"], data["eyes"])
# dataindex += 1
# frameno += 1
# cap.release()
    def frames (self, only_with_features=False):
        cap = cv2.VideoCapture(self.src)
        frameno = 0
        dataindex = 0
        while True:
            ret, frame = cap.read()
            if not ret:
                # stop when the video is exhausted
                break
            faces, eyes = None, None
if dataindex < len(self.data):
data = self.data[dataindex]
if frameno == data["frameno"]:
faces = data["faces"]
eyes = data["eyes"]
dataindex += 1
if only_with_features:
if faces:
yield (frameno, frame, faces, eyes)
else:
yield (frameno, frame, faces, eyes)
frameno += 1
cap.release()
from argparse import ArgumentParser
import sys, cv2, json
from numpy import zeros
p = ArgumentParser()
p.add_argument("src")
p.add_argument("--shift", type=int, default=1)
p.add_argument("--output", default="output.avi")
p.add_argument("--framerate", type=float, default=25, help="output frame rate")
p.add_argument("--frames", type=int, default=None, help="output only so many frames")
p.add_argument("--data", default="faces.json", help="json face data for src1")
p.add_argument("--fourcc", default="XVID", help="MJPG,mp4v,XVID")
args = p.parse_args()
try:
fourcc = cv2.cv.CV_FOURCC(*args.fourcc)
except AttributeError:
fourcc = cv2.VideoWriter_fourcc(*args.fourcc)
from source import Source
src = Source(args.src, args.data)
src.unparse()
out = cv2.VideoWriter()
out.open(args.output, fourcc, args.framerate, (src.width, src.height))
count = 0
newfaces = src.faces(with_frame=True)
for i in range(args.shift):
frameno, frame, face = newfaces.next()
for frameno, frame, faces, eyes in src.frames(only_with_features=True):
print "output frame {0}".format(count+1)
for face in faces:
x, y, w, h = face
_, newframe, newface = newfaces.next()
nx, ny, nw, nh = newface
# resize newface to fit oldface
newfaceimg = newframe[ny:ny+nh, nx:nx+nw]
newfaceimg = cv2.resize(newfaceimg, (w, h))
frame[y:y+h, x:x+w] = newfaceimg[0:h, 0:w]
out.write(frame)
count += 1
if args.frames and count >= args.frames:
break
out.release()
#!/usr/bin/env python
from argparse import ArgumentParser
from flat import rgb, font, shape, strike, document, text
import struct
from csv import DictReader
def decode_hexcolor (p):
"#FFFFFF => (255, 255, 255)"
if p.startswith("#"):
p = p.lstrip("#")
return struct.unpack('BBB',p.decode('hex'))
DEFAULT_BODY_FONT = '/var/www/vhosts/automatist.net/deptofreading/fonts/NimbusRomD.ttf'
DEFAULT_TEXT_COLOR = '#000000'
DEFAULT_TEXT_SIZE = 10.0
DEFAULT_TEXT_LEADING = 12.0
DEFAULT_LABEL_FONT = '/var/www/vhosts/automatist.net/deptofreading/fonts/NimbusSansD.ttf'
DEFAULT_LABEL_SIZE = 8
DEFAULT_LABEL_LEADING = 10.5
DEFAULT_FORMAT = "a4"
DEFAULT_LABEL = "Das Projekt einer Zeitschrift\nAbs. Am Flutgraben 3, 12435 Berlin"
DEFAULT_ADDRESSES = "/var/www/vhosts/automatist.net/deptofreading/addresses.csv"
DEFAULT_OUTPUT = "zeitschrift.pdf"
class Zeitschrift (object):
def __init__(self):
self.bodyfont = DEFAULT_BODY_FONT
self.textcolor = DEFAULT_TEXT_COLOR
self.textsize = DEFAULT_TEXT_SIZE
self.textleading = DEFAULT_TEXT_LEADING
self.labelfont = DEFAULT_LABEL_FONT
self.labelsize = DEFAULT_LABEL_SIZE
self.labelleading = DEFAULT_LABEL_LEADING
self.format = DEFAULT_FORMAT
self.label = DEFAULT_LABEL
self.addresses = DEFAULT_ADDRESSES
self.from_address = 1
self.to_address = 1
self.output = DEFAULT_OUTPUT
self.textsrc = None
self.text = None
def make (self):
zeitschrift(self)
def zeitschrift (args):
textcolor = rgb(*decode_hexcolor(args.textcolor))
bodyfont = font.open(args.bodyfont)
labelfont = font.open(args.labelfont)
# box = shape().stroke(textcolor).width(0.5)
# headline = strike(font).color(black).size(20, 24)
body = strike(bodyfont).color(textcolor).size(args.textsize, args.textleading)
labelf = strike(labelfont).color(textcolor).size(args.labelsize, args.labelleading)
if args.format == "a3":
doc = document(400, 277, 'mm') # A3
column_width = 90
column_height = 267.5
elif args.format == "a4":
doc = document(200, 277, 'mm') # A4: 210 x 297 trimmed 10mm
column_width = 90
column_height = 267.5
#############################
# BODY
#############################
# each line is paragraph
# insert blanks between each
if args.text:
src = args.text.splitlines()
else:
src = open(args.textsrc).readlines()
src = [x.decode("utf-8") for x in src]
paras = []
for p in src:
paras.append(body.paragraph(p))
paras.append(body.paragraph(''))
story = text(*paras)
#############################
# LABEL
#############################
# label = open(args.label).readlines()
label = args.label.splitlines()
label = [x.strip() for x in label if x.strip()]
label = [labelf.paragraph(x) for x in label]
label = text(*label)
#from csv import DictReader
#r = DictReader(open(args.addresses))
addresses = []
if type(args.addresses) == list:
addresses = args.addresses
else:
# process csv
for n, row in enumerate(DictReader(open(args.addresses))):
if (n+1) >= args.from_address and (n+1) <= args.to_address:
addresses.append([row['Name'], row['Address1'], row['Address2'], row['Address3']])
for address in addresses:
label_doc = document(column_height/2, column_width, 'mm')
label_page = label_doc.addpage()
label_page.place(label).frame(5, 5, column_height/2-10, column_width-10)
# label_page.place(box.rect(5, 5, 90, 90))
# label_doc.pdf("label.pdf")
address = text(*[labelf.paragraph(x) for x in address])
label_page.place(address).frame(69.25, 50, 100, 100)
label_image = label_page.image(ppi=180, kind='rgb')
label_image.rotate(False)
# block = page.place(story).frame(p(5), p(5), p(90), p(267.5))
done = False
block = None
page = doc.addpage()
left = 5
# block = page.place(story).frame(left, 5, column_width, column_height)
block = page.place(story).frame(left, 5 + (column_height/2), column_width, (column_height/2))
page.place(label_image).frame(0, 0, 90, column_height/2)
#page.place(box.rect(5, 5, 90, column_height/2))
columns = 4
if args.format == "a4":
columns = 2
while block.overflow():
for i in range(columns-1):
left += 100
block = page.chain(block).frame(left, 5, column_width, column_height)
if not block.overflow():
break
if block.overflow():
left = 5
page = doc.addpage()
block = page.chain(block).frame(left, 5, column_width, column_height)
# p.image(kind='rgb').png('hello.png')
# d.page(0).svg('hello.svg')
doc.pdf(args.output)
if __name__ == "__main__":
parser = ArgumentParser(description='Make a Department of Reading PDF Zeitschrift from text source')
parser.add_argument('--bodyfont', default=DEFAULT_BODY_FONT, help='body font')
parser.add_argument('--textcolor', default=DEFAULT_TEXT_COLOR, help='Text color')
parser.add_argument('--textsize', type=int, default=DEFAULT_TEXT_SIZE, help='Text size (mm?)')
parser.add_argument('--textleading', type=int, default=DEFAULT_TEXT_LEADING, help='Text leading (mm?)')
parser.add_argument('--labelfont', default=DEFAULT_LABEL_FONT, help='label font')
parser.add_argument('--labelsize', type=float, default=DEFAULT_LABEL_SIZE, help='Label font size (mm?)')
parser.add_argument('--labelleading', type=float, default=DEFAULT_LABEL_LEADING, help='Label font leading (mm?)')
parser.add_argument('--format', default=DEFAULT_FORMAT, help='Format (a3, a4)')
parser.add_argument('--label', default=DEFAULT_LABEL, help='File with return label (in lines)')
parser.add_argument('--addresses', default=DEFAULT_ADDRESSES, help='File with addresses')
parser.add_argument('--from-address', type=int, default=1, help='Start with address number (default: 1)')
parser.add_argument('--to-address', type=int, default=1, help='End with address number (default: 1)')
parser.add_argument('-o', '--output', default=DEFAULT_OUTPUT, help='Output file')
parser.add_argument('--textsrc', default="zeitschrift.txt", help='Text source (filename)')
parser.add_argument('--text', help='Text')
parser.add_argument('--wikiurl', help='Text source (wikisrc)')
args = parser.parse_args()
if args.wikiurl:
baseurl, sectionid = args.wikiurl.split("#")
import html5lib, urllib2, sys
from xml.etree import ElementTree as ET
import requests
# f = urllib2.urlopen(baseurl)
# src = f.read().decode("latin-1")
src = requests.get(baseurl).text
t = html5lib.parse(src, namespaceHTMLElements=False)
div = t.find(".//div[@id='{0}']".format(sectionid))
_text = ET.tostring(div, method="text", encoding="utf-8").decode("utf-8")
_text = u"\n".join([x.strip() for x in _text.splitlines() if x.strip()])
args.text = _text
zeitschrift(args)
# contours
%.contours.svg: %.png
bin/contours $< > $@
%.contours.svg: %.jpg
bin/contours $< > $@
# texture (& lexicality, textline, para, block
%.texture.svg: %.png
bin/texture $< $*.texture.svg $*.lexicality.svg $*.textline.svg $*.para.svg $*.block.svg
%.texture.svg: %.jpg
bin/texture $< $*.texture.svg $*.lexicality.svg $*.textline.svg $*.para.svg $*.block.svg
# channel
%.red.png: %.png
bin/channel $< $*.red.png $*.green.png $*.blue.png
%.red.png: %.jpg
bin/channel $< $*.red.png $*.green.png $*.blue.png
%.green.png: %.png
bin/channel $< $*.red.png $*.green.png $*.blue.png
%.green.png: %.jpg
bin/channel $< $*.red.png $*.green.png $*.blue.png
%.blue.png: %.png
bin/channel $< $*.red.png $*.green.png $*.blue.png
%.blue.png: %.jpg
bin/channel $< $*.red.png $*.green.png $*.blue.png
# houghlines
# Z3.hough.png <= Z3.png
# Z3.hough.X0Y0.png <= Z3.hough.png
%.hough.svg: %.jpg
bin/houghlines $< > $@
%.hough.svg: %.png
bin/houghlines $< > $@
# gradient: NEEDS WORK
# %.gradient.png: %.jpg
################################
# VIEWER rules
# pyramid rules
TILESIZE=128
.PRECIOUS: %/Z0.png %/Z1.png %/Z2.png %/Z3.png %/Z4.png %/Z5.png
%/Z0.png: %.png
mkdir -p $*
python viewer/pyramid.py $< --tilesize $(TILESIZE) --output "$*/Z{z}.png"
%/Z0.png: %.jpg
mkdir -p $*
python viewer/pyramid.py $< --tilesize $(TILESIZE) --output "$*/Z{z}.png"
# SVG to png
%.png: %.svg
# tile rule
%.X0Y0.png : %.png
python viewer/tile.py $< --tilesize $(TILESIZE) --output "$*.X{x}Y{y}.png"
# make the map
%.html: %.png
mkdir -p $*
python viewer/makeviewer.py \
--tilesize $(TILESIZE) \
--template viewer/map.html \
--output $@ $<
%.html: %.jpg
mkdir -p $*
python viewer/makeviewer.py \
--tilesize $(TILESIZE) \
--template viewer/map.html \
--output $@ $<
%/error.jpg:
convert -size $(TILESIZE)x$(TILESIZE) canvas:red $@
// originals/2016-11-09/Z0.hough.X0Y0.png
var pixels = L.tileLayer("{{basename}}/Z{z}.X{x}Y{y}.png", {
continuousWorld: true,
tileSize: {{tilesize}},
errorTileUrl: "/lib/error.jpg",
bounds: [[bounds.lat+1, 1], [-1, bounds.lng-1]]
}).addTo(map),
border = L.rectangle([[bounds.lat,0],[0,bounds.lng]], {color: "#aaaaaa", fill: false, weight: 1, pane: 'tilePane'}),
var red = L.tileLayer("{{basename}}/Z{z}.red.X{x}Y{y}.png", {
continuousWorld: true,
tileSize: {{tilesize}},
errorTileUrl: "/lib/error.jpg",
bounds: [[bounds.lat+1, 1], [-1, bounds.lng-1]]
});
var green = L.tileLayer("{{basename}}/Z{z}.green.X{x}Y{y}.png", {
continuousWorld: true,
tileSize: {{tilesize}},
errorTileUrl: "/lib/error.jpg",
bounds: [[bounds.lat+1, 1], [-1, bounds.lng-1]]
});
var blue = L.tileLayer("{{basename}}/Z{z}.blue.X{x}Y{y}.png", {
continuousWorld: true,
tileSize: {{tilesize}},
errorTileUrl: "/lib/error.jpg",
bounds: [[bounds.lat+1, 1], [-1, bounds.lng-1]]
});
var contours = L.tileLayer("{{basename}}/Z{z}.contours.X{x}Y{y}.png", {
continuousWorld: true,
tileSize: {{tilesize}},
errorTileUrl: "/lib/error.jpg",
bounds: [[bounds.lat+1, 1], [-1, bounds.lng-1]]
}),
hough = L.tileLayer("{{basename}}/Z{z}.hough.X{x}Y{y}.png", {
continuousWorld: true,
tileSize: {{tilesize}},
errorTileUrl: "/lib/error.jpg",
bounds: [[bounds.lat+1, 1], [-1, bounds.lng-1]]
}),
texture = L.tileLayer("{{basename}}/Z{z}.texture.X{x}Y{y}.png", {
continuousWorld: true,
tileSize: {{tilesize}},
errorTileUrl: "/lib/error.jpg",
bounds: [[bounds.lat+1, 1], [-1, bounds.lng-1]]
}),
lexicality = L.tileLayer("{{basename}}/Z{z}.lexicality.X{x}Y{y}.png", {
continuousWorld: true,
tileSize: {{tilesize}},
errorTileUrl: "/lib/error.jpg",
bounds: [[bounds.lat+1, 1], [-1, bounds.lng-1]]
}),
textline = L.tileLayer("{{basename}}/Z{z}.textline.X{x}Y{y}.png", {
continuousWorld: true,
tileSize: {{tilesize}},
errorTileUrl: "/lib/error.jpg",
bounds: [[bounds.lat+1, 1], [-1, bounds.lng-1]]
}),
para = L.tileLayer("{{basename}}/Z{z}.para.X{x}Y{y}.png", {
continuousWorld: true,
tileSize: {{tilesize}},
errorTileUrl: "/lib/error.jpg",
bounds: [[bounds.lat+1, 1], [-1, bounds.lng-1]]
}),
block = L.tileLayer("{{basename}}/Z{z}.block.X{x}Y{y}.png", {
continuousWorld: true,
tileSize: {{tilesize}},
errorTileUrl: "/lib/error.jpg",
bounds: [[bounds.lat+1, 1], [-1, bounds.lng-1]]
});