YAPC::Europe: Wednesday
These are just my notes, tidied up for obvious spelling errors and with URLs added for the interesting bits. They may still contain errors. I'll post proper, better-scoped articles later.
AntiSocial Perl (Damian Conway)
- rod logic
- Rod::Logic (unfortunately not on CPAN :-( ;-)
- quantum mechanics + special relativity
- Dirac equation
- Feynman diagram
- a positron travels back in time
- positronic programming
- Positronic::Variables (unfortunately not on CPAN :-( ;-)
- Deutsch's CTCs (closed time-like curves)
Test::Harness 3.0 (Curtis Poe)
TAP::Parser will become Test::Harness 3.0; dev release next week
TAP:
- current TAP is version 13 or 14
- TAP version 1: January 30 1988
- July 8 1996: version 5, all non-TAP output ignored
- "Bail out!"
- v13 understands TAP version syntax
- TAP parsers: runtests gets this right, prove does not
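A quick sketch of what the new parser gives you (assuming TAP::Parser is installed; the sample TAP v13 stream is made up for illustration):

```perl
use strict;
use warnings;
use TAP::Parser;

# A made-up TAP v13 stream, for illustration only.
my $tap = <<'END_TAP';
TAP version 13
1..2
ok 1 - first test
not ok 2 - second test
END_TAP

my $parser = TAP::Parser->new( { tap => $tap } );
my ( $pass, $fail ) = ( 0, 0 );
while ( my $result = $parser->next ) {
    next unless $result->is_test;    # skip the version and plan lines
    $result->is_ok ? $pass++ : $fail++;
}
print "passed=$pass failed=$fail\n";    # passed=1 failed=1
```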
Test::Harness issues:
- very difficult to upgrade TAP
- difficult to provide alternative views
- gets confused by incorrect test counts
- difficult to track down skip and todo
- multi-language tests in a suite are difficult
why not refactor Test::Harness?
-> 20 years of cruft
-> several people have tried and failed
-> dangerous to break the tool chain
design goals:
- backward compatible
- runs on perl 5.005
- no non-core modules
- runs everywhere Test::Harness does
- MVC
- no bugs
- support new TAP versions
- support tests in multiple languages, using driver programs
todo:
* improve coverage (btw, there's a bug in Devel::Cover)
* optimize (the optimized runtests is catching up with prove, yet returns much more information)
future plans:
- parallel test runs
- GUI and HTML views
- improved diagnostics via a YAML subset
- repeatable shuffles
- runtime environment description
who's using it:
- Yahoo! (tagging of the tests)
- xmms2 (multi-language tests)
- Smolder (run locally, display remotely)
  - problems with Test::TAP::HTMLMatrix (internals are YAML, not XML; not good for documents, which is what test reports are)
Automated Testing of Open Source Software (Gabor Szabo)
Gabor Szabo: CPAN::Forum, test automation, QA day
- TAP
- automation in OSS <- the subject of the talk
business value:
- reduce the feedback cycle
- continuous builds, automated smoke (regression) tests
- report generation: overview, current status, drill down to see where something broke
- accountability
companies vs open source:
- limited budget for QA - no paid QA people
- market pressure to release buggy software = release often, release soon
open source:
- test locally, report remotely
- security considerations when downloading software
perl 5 development:
- Perforce, RT
- rsync to get the source
- commit messages on a mailing list
- TAP
- Smoke (needs a C compiler, a working perl, Test::Smoke)
- db.test-smoke.org (not updated any more)
- www.test-smoke.org
- centralization or decentralization of smoke testing?
- perl 5: easy participation
- Parrot testing
  - multi-language testing (Perl, PASM, PIR)
  - smoke: uses TAP and Test::TAP::HTMLMatrix (will be replaced by Smolder)
- pugs
  - Subversion and SVK
  - needs GHC (Glasgow Haskell Compiler), Perl and Test::TAP::HTMLMatrix
- CPAN
  - easier is: CPAN + CPAN::Reporter
- SQLite
  - CVS; tests written in C and Tcl
  - very good coverage (98%)
  - no automated smoke testing
  - CVS HEAD is currently broken
- NUT - Network UPS Tools
  - uses BuildBot for automated builds
  - no automated tests! you need the device being tested, and the system might shut down during the test
- Ruby
  - uses Subversion
  - unit tests written in Ruby
  - Rubinius has a separate test suite
  - no automated smoke testing
- PGSQL
  - test suite: home-grown perl scripts
  - long and frightening list of setup steps ... but it is easy
  - needs registration
How to Find Vulnerabilities in Perl Code (mock)
- 10k modules on CPAN
- 500k results from lang:perl on Google Code Search
anatomy of a vulnerability:
- user-manipulatable
- causes harm
- usually found at the boundaries between systems (perl/SQL, perl/web, perl/filesystem, perl string/unicode)
examples: SQL injection, XSS, Flash cross-domain-policy
lang:perl open\s+[A-Z0-9]+,\s*".*$
gives > 19k results
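That pattern hunts for two-arg open on an interpolated string. A minimal sketch of why that matters (the "filename" here is invented; a trailing pipe turns open into command execution):

```perl
use strict;
use warnings;

# Pretend this came from a user; the trailing "|" is the attack.
my $input = 'echo pwned |';

# Two-arg open: perl interprets the "|" and RUNS the command.
open my $cmd, $input or die "open failed: $!";
my $out = <$cmd>;
close $cmd;
print "two-arg open gave us: $out";    # two-arg open gave us: pwned

# Three-arg open treats $input as a literal filename, so the
# attack fails (no file is actually named "echo pwned |").
my $ok = open my $fh, '<', $input;
print "three-arg open succeeded? ", ( $ok ? "yes" : "no" ), "\n";
```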
lang:perl (SELECT|DELETE).FROM.=\s*'?[$@]
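That second pattern hunts for SQL built by interpolating perl variables. A pure-string sketch of the resulting injection (the table and column names are invented):

```perl
use strict;
use warnings;

# Attacker-supplied value:
my $name = "x' OR '1'='1";

# Interpolating it straight into SQL changes the query's structure:
my $unsafe = "SELECT * FROM users WHERE name = '$name'";
print "$unsafe\n";
# SELECT * FROM users WHERE name = 'x' OR '1'='1'

# With DBI you would use a placeholder instead, so the value can
# never alter the SQL:
#   $dbh->selectall_arrayref( 'SELECT * FROM users WHERE name = ?',
#                             undef, $name );
```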
methodology: find harm, and also find something to manipulate
you can manipulate:
- content (taint mode protects against this)
- structure
- race conditions (difficult to find and rarely manipulatable)
- predictable state
- data leakage
any variable in a template is potentially an XSS
stompy - a tool to detect bad PRNGs
http://lcamtuf.coredump.cx/stompy.tgz
SideJacking - is your session encrypted, or just your login?
http://www.erratasec.com/sidejacking.zip
fuzzing: PeachFuzz
http://peachfuzz.sourceforge.net
follow the data flow from user-manipulatable input to causing harm
don't forget XS
Introduction to Moose (Stevan Little)
use Moose imports:
- the keywords has, extends, with, before, after, around, super, override, inner, augment
- use strict and use warnings
- Carp's confess and Scalar::Util's blessed
no Moose; 1;
pseudo-typing for perl 5 -> it's actually a validator
->meta returns the metaclass
- the metaclass defines the class
- the metaclass is itself an instance of a metaclass
it's for:
* introspection
- modifying classes (add/remove methods, add/remove attributes)
- programmatically creating classes
attribute delegation
type constraints, unions
type coercions:
* create a subtype
- add a coerce attribute
- use coerce to say precisely what data is coerced, and how
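A minimal sketch tying those pieces together (assuming Moose is installed; the Temperature class and its Celsius subtype are invented for illustration):

```perl
package Temperature;
use Moose;
use Moose::Util::TypeConstraints;

# a subtype with a constraint ...
subtype 'Celsius', as 'Num', where { $_ >= -273.15 };

# ... plus a coercion saying precisely what is coerced (a Str) and how
coerce 'Celsius', from 'Str', via { my ($n) = /(-?[\d.]+)/; $n };

has degrees => ( is => 'rw', isa => 'Celsius', coerce => 1 );

no Moose;
1;

package main;
my $t = Temperature->new( degrees => '21.5 C' );    # the Str gets coerced
print $t->degrees, "\n";                            # 21.5

# ->meta gives us the metaclass, for introspection:
print join( ',', Temperature->meta->get_attribute_list ), "\n";   # degrees
```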
Benefits of Moose:
- code is less tedious
- no need to worry about basic mechanics of OO, like:
  - object initialization
  - object destruction
  - attribute storage, access and initialization
- less tedium means many typo errors are all but eliminated
- code is shorter
  - Moose's declarative style allows you to say more with less
  - less code == fewer bugs
- less low-level testing needed
  - no need to verify things which are covered by the Moose test suite (3k tests)
- code becomes more descriptive (code is documentation)
Drawbacks:
- fairly heavy compile-time cost
  - not good for non-persistent environments
  - looking at using .pmc to reduce this burden
- some Moose features are slow at times
  - speed is directly proportional to the amount of features used
- extending non-hash-based classes is tricky
  - e.g. IO::* (use Class::InsideOut or Object::InsideOut, or use delegation)
- Matt Trout is hacking the lexer to lift some subroutines from compile time to runtime (or the other way round, can't remember what he said)
- the role system is very inefficient at the moment
Kwalitee (Xavier Caron)
definition attempt:
* an approximation of "Quality"
- confidence
  - through passing tests, but that's not enough
  - but a correlation exists if there is functional test coverage
- bug = diff between expectation and implementation
- bug = diff between test, documentation and code
- you tend towards the goal, but you won't reach it
- ages before:
  - literature, CPAN, articles, conferences
  - read, learn, evolve
- before:
  - generate a skeleton
  - write tests (a tad of XP)
- while
- after:
  - test
  - measure POD coverage
  - measure test code coverage
  - measure functional test coverage
  - generate synthetic reports
- way after (release)
"Always code as if the guy who ends up maintaining your code will be a violent psychopath who knows where you live" - Damian Conway
"Thus, programs must be written for people to read, and only incidentally for machines to execute." - Abelson & Sussman
prerequisites:
- version control
- version control standards
- coding standards
- a ticket tracker
- a text editor or IDE
- do not reinvent the wheel - avoid repeating others' errors
- use CPAN
"I code in CPAN, the rest is syntax." - Audrey Tang
the programmer's triptych:
- pod (hubris)
- tests (laziness)
- code (impatience)
At the beginning:
- file tree structure
- use a dedicated CPAN module: Module::Starter (or Module::Starter::PBP)
Testing for dummies:
- test = confront intention with implementation
- using techniques (directed or constrained random testing) and a reference model (OK ~ no difference vs the reference)
- TDD
- test suite ~ an executable specification
"old tests don't die, they just become non-regression tests!" - chromatic & Michael G Schwern
the tester asks:
- "is this correct?"
- "am I finished?"
code coverage != functional coverage
how do I measure functional coverage in perl?
- for HDVLs there is SystemVerilog
- for perl: Test::LectroTest
TAP:
- skip: because of an external factor
- todo: not yet implemented
CPANTS:
- defines kwalitee
- metrics (13)
assertions:
"dead programs tell no lies" - Hunt and Thomas, The Pragmatic Programmer
most tests are directed; an alternative is "constrained random testing"
- let the machine do the dirty job, (pseudo-)randomly (like in hardware testing)
-> use the Test::LectroTest module
-> stick a type to each function parameter
-> add constraints to parameters (i.e. restrict them to subsets)
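A dependency-free sketch of the idea (Test::LectroTest does this properly; here the property, the constraint and the trial count are all invented):

```perl
use strict;
use warnings;

# A property we expect to hold for all inputs:
sub prop_addition_commutes {
    my ( $x, $y ) = @_;
    return $x + $y == $y + $x;
}

# Constrained random testing: let the machine pick the inputs,
# restricted to a subset (here: integers in [-100, 100)).
my $failures = 0;
for ( 1 .. 500 ) {
    my $x = int( rand(200) ) - 100;
    my $y = int( rand(200) ) - 100;
    $failures++ unless prop_addition_commutes( $x, $y );
}
print "failures: $failures\n";    # failures: 0
```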
refactor early, refactor often (on feature branches)
there is technique and there is commitment
"At that time [1909] the chief engineer was almost always the chief test pilot as well. That had the fortunate result of eliminating poor engineering early in aviation." - Igor Sikorsky
High Order Parsing in Perl (Mark Jason Dominus)
parsing = unstructured data -> data structure
closed vs open systems:
- open system: + flexible, powerful, unlimited; - requires more understanding
- Parse::RecDescent is a really excellent closed system
- open system: HOP::Parser
example: a web app where the user input is a math function, and we want a graph of it
easy solution: use eval to turn the user input into compiled perl code
what can go wrong:
- the input is "rm -rf"
- in perl, ^ means bitwise exclusive or, not exponentiation
- ...
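The ^ pitfall in two lines: a user typing 2^3 expects 8, but eval'd perl computes bitwise XOR:

```perl
use strict;
use warnings;

print 2 ^ 3, "\n";     # 1  - bitwise exclusive or
print 2 ** 3, "\n";    # 8  - what the user actually meant
```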
alternative: implement an evaluator for expressions
- input: a string
- output: compiled code, or an abstract syntax tree, or a specialized data structure, or an expression object, or ...
structure of an expression -> grammars:
expression -> "(" expression ")" | term ("+" expression | nothing)
term -> factor ("*" term | nothing)
factor -> atom ("^" NUMBER | nothing)
atom -> NUMBER (argh! something's missing here)
lexing idea: preprocess the input, the way humans do when they read
- first, turn the sequence of characters into a sequence of words
- then try to understand the structure of the sentence based on the meanings of the words
- this is called lexing
lexing is mostly a matter of pattern matching; perl actually has special regex features just for this purpose
tokens:
sub type {}
sub value {}
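A sketch of such a lexer using perl's \G anchor and /gc (the token names and the sample input are made up):

```perl
use strict;
use warnings;

my $input = '2 * (3 + 4)';
my @tokens;

# \G continues where the previous /gc match left off, which is
# exactly the "special regex feature" a hand-rolled lexer needs.
while (
    $input =~ m{ \G (?: (\d+)          # a NUMBER
                      | ([-+*/^()])    # an operator or paren
                      | \s+            # skip whitespace
                    ) }gcx
    )
{
    if    ( defined $1 ) { push @tokens, [ NUMBER => $1 ] }
    elsif ( defined $2 ) { push @tokens, [ OP     => $2 ] }
}

print scalar @tokens, " tokens\n";    # 7 tokens
```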
recursive descent parsing
idea: each grammar rule becomes a function
parsers:
- easy one: nothing
- others: parsers for a specific token
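A sketch of "each grammar rule becomes a function" for the grammar above (this is plain recursive descent, not the actual HOP::Parser combinators; the token list is hard-coded for illustration):

```perl
use strict;
use warnings;

# One function per grammar rule; @toks is a flat token list.
our @toks;
sub peek    { $toks[0] }
sub advance { shift @toks }

sub expression {                       # term ("+" expression | nothing)
    my $v = term();
    if ( defined peek() and peek() eq '+' ) { advance(); $v += expression() }
    return $v;
}

sub term {                             # factor ("*" term | nothing)
    my $v = factor();
    if ( defined peek() and peek() eq '*' ) { advance(); $v *= term() }
    return $v;
}

sub factor {                           # atom ("^" NUMBER | nothing)
    my $v = atom();
    if ( defined peek() and peek() eq '^' ) { advance(); $v **= advance() }
    return $v;
}

sub atom {                             # NUMBER | "(" expression ")"
    if ( defined peek() and peek() eq '(' ) {
        advance();                     # eat "("
        my $v = expression();
        advance();                     # eat ")"
        return $v;
    }
    return advance();                  # a NUMBER
}

@toks = ( '2', '*', '(', '3', '+', '4', ')' );    # i.e. "2*(3+4)"
print expression(), "\n";                         # 14
```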