Slavoj Žižek: 2nd half of 2014


Terry Eagleton reviews Trouble in Paradise and Absolute Recoil by Slavoj Žižek | Wednesday 12 November 2014 08.59 GMT | The Guardian

Posted in philosophy | Tagged | Leave a comment

Slavoj Žižek: 1st half of 2014


Posted in philosophy | Tagged | Leave a comment

Slavoj Žižek: 2nd half of 2013

Chomsky – Žižek ‘Debate’

Posted in philosophy | Tagged | Leave a comment

Slavoj Žižek: 1st half of 2013


Posted in philosophy | Tagged | 2 Comments

Hmmm

Somewhat...

What on earth is this indescribable itchy, restless feeling?

Forgive my ignorance, but is that lighter or darker than a medium roast?

Be that as it may, for some reason there is something here that stops you in your tracks.  For instance, "extremely mediocre" feels like a perfectly natural turn of phrase.  Tariq Ali's extreme centre also makes sense, though that may simply be because the words come wrapped in their own context.  "Extremely moderate", too: you can imagine situations where someone would actually say it.  Even "somewhat moderate" works, as long as something comes before it.  But when 「やや中煎り」, "somewhat medium roast", hits you out of nowhere, you are left with that nagging question of which side it is approaching the middle from.

Posted in linguistics, Uncategorized | Tagged | Leave a comment

Slavoj Žižek: 2nd half of 2012

Posted in philosophy | Tagged | Leave a comment

Slavoj Žižek: 1st half of 2012

Painting of Slavoj Žižek by Dirk Kolassa.  Title: Das Richtige im Falschen ("The Right Within the Wrong").  (Photo credit: Wikipedia)


A modest plea for enlightened catastrophism

Posted in philosophy | Tagged | Leave a comment

「ひ」 and 「し」

No, this is not about Edokko speech (Edo natives famously swap the two sounds).

I mean the thing where someone says "shiite wa" when they presumably meant "hiite wa" ("and by extension").  When did I first hear it?  Around fifteen years ago, I think.  Not that I care much about which is "correct", but I remember it made me strangely queasy.

Saying "hiite wa" in the sense of "extending the point" feels perfectly normal to me, but what is "shiite wa" even supposed to be?  Is it some long-established expression rather than a slip of the tongue?  I have never heard of such a thing.  Sure, there is 強いて (shiite, "forcibly"), as in "she hates it, so I won't go so far as to force her to do it", but that is something else.

And just the other day, for the first time in ages, I heard it reproduced right next to me.  "Shiite wa"...  Once again it felt thoroughly, thoroughly queasy.

And this time I had a small thought about why it feels so wrong, so here it is.  My theory: from the brain of a person who rather likes coercing (shiiru) people, the sound of "shiiru" slips out by some accident.

A pretty far-fetched theory, I admit, but both instances I have witnessed fit it perfectly, so I am noting it down.

Revised Mon Jul 23 02:03:47 JST 2012


Posted in Uncategorized | Leave a comment

Least Squares Fitting of… (1)

Affine Transformation, 2D, 4 Degrees of Freedom

(a variation of the previous theme).

1. Intro

What I did last time was fitting an affine transformation with the full 6 DoF.  This time it is 4 DoF.  By 4-DoF, I mean uniform scaling, rotation and x, y translations.

Say the scale is c, the rotation angle is \theta and the translations are s, t.  Then the 4-DoF transformation from (x, y) to (u, v) can be written as:

\displaystyle \begin{bmatrix} u \\ v \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} c & 0 \\ 0 & c \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} + \begin{bmatrix} s \\ t \end{bmatrix}

\displaystyle = \begin{bmatrix} c\cos \theta & -c\sin\theta \\ c\sin\theta & c\cos\theta \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} + \begin{bmatrix} s \\ t \end{bmatrix}.

Letting a = c\cos\theta and b = c\sin\theta, the transform above can be rewritten as:

\displaystyle \begin{bmatrix} u \\ v \end{bmatrix} = \begin{bmatrix} a & -b \\ b & a \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} + \begin{bmatrix} s \\ t \end{bmatrix}.

So again my problem is to choose nice values of a, b, s, t that minimise this error function:

\displaystyle E(a, b, s, t) = \sum_{i=0}^{N-1} [(ax_i - by_i + s - u_i)^2 + (bx_i + ay_i + t - v_i)^2].
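(Aside: a minimal sketch of this error function in Python with NumPy, just to fix ideas; the function name and the array conventions are my own, nothing standard:)

import numpy as np

def similarity_error(a, b, s, t, x, y, u, v):
    # x, y, u, v: 1-D arrays of length N holding corresponding points
    ru = a * x - b * y + s - u  # residuals in u
    rv = b * x + a * y + t - v  # residuals in v
    return np.sum(ru ** 2 + rv ** 2)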

2. by Calculus

Again, in order for the error function E to have its minimum at some a, b, s, t, all the partial derivatives must be 0 there.  With some hand calculation (a bit tedious), this requirement leads to the following system of equations:

\displaystyle \sum x_i^2 a + \sum x_i s - \sum x_i u_i + \sum y_i^2 a + \sum y_i t - \sum y_i v_i = 0,

\displaystyle \sum y_i^2 b - \sum y_i s + \sum y_i u_i + \sum x_i^2 b + \sum x_i t - \sum x_i v_i = 0,

\displaystyle Ns + \sum x_i a - \sum y_i b - \sum u_i = 0,

\displaystyle Nt + \sum x_i b + \sum y_i a - \sum v_i = 0.

This system looks quite cumbersome, but it is solvable.  First, for a and b:

\displaystyle a = \dfrac{-\sum x_i \sum u_i + N \sum x_i u_i - \sum y_i \sum v_i + N \sum y_i v_i}{N \sum x_i^2 - (\sum x_i)^2 + N \sum y_i^2 - (\sum y_i)^2},

\displaystyle b = \dfrac{\sum y_i \sum u_i - N \sum y_i u_i - \sum x_i \sum v_i + N \sum x_i v_i}{N\sum y_i^2 - (\sum y_i)^2 + N \sum x_i^2 - (\sum x_i)^2}.

And once a, b are obtained, s, t are too:

\displaystyle s = (-\sum x_i a + \sum y_i b + \sum u_i)/N,

\displaystyle t = (-\sum x_i b - \sum y_i a + \sum v_i)/N.

sweat sweat sweat.  And this is ugly.
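Still, it is mechanical.  A minimal sketch of the closed-form solution in Python with NumPy (names are all mine):

import numpy as np

def fit_similarity_calculus(x, y, u, v):
    # closed-form 4-DoF fit: returns (a, b, s, t) from the sums above
    N = len(x)
    Sx, Sy, Su, Sv = x.sum(), y.sum(), u.sum(), v.sum()
    Sxx, Syy = (x * x).sum(), (y * y).sum()
    Sxu, Syv = (x * u).sum(), (y * v).sum()
    Syu, Sxv = (y * u).sum(), (x * v).sum()
    denom = N * Sxx - Sx ** 2 + N * Syy - Sy ** 2
    a = (-Sx * Su + N * Sxu - Sy * Sv + N * Syv) / denom
    b = (Sy * Su - N * Syu - Sx * Sv + N * Sxv) / denom
    s = (-Sx * a + Sy * b + Su) / N
    t = (-Sx * b - Sy * a + Sv) / N
    return a, b, s, t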

3. by Linear Algebra

Pretending I am already given nice a, b, s, t, I can write:

\displaystyle \begin{bmatrix} ax_0 - by_0 + s \\ bx_0 + ay_0 + t \\ ax_1 - by_1 + s \\ bx_1 + ay_1 + t \\ ... \end{bmatrix} \approx \begin{bmatrix} u_0 \\ v_0 \\ u_1 \\ v_1 \\ ... \end{bmatrix}.

Tweaking and rearranging the above non-equation to factor a, b, s, t out (a bit of a puzzle), I can rewrite it as follows:

\displaystyle \begin{bmatrix} x_0 & -y_0 & 1 & 0 \\ y_0 & x_0 & 0 & 1 \\ x_1 & -y_1 & 1 & 0 \\ y_1 & x_1 & 0 & 1 \\ ... & ... & ... & ... \end{bmatrix} \begin{bmatrix} a \\ b \\ s \\ t \end{bmatrix} \approx \begin{bmatrix} u_0 \\ v_0 \\ u_1 \\ v_1 \\ ... \end{bmatrix}.

Then I would rely on the Householder transform again to solve for a, b, s, t.  Hum.  In this way, I can have my computer do the work instead of working hard myself.  Relieved.
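A sketch of this route in NumPy.  (Caveat: np.linalg.lstsq is backed by an SVD-based LAPACK routine, not by Householder reflections, but it plays the same role of solving the overdetermined system without any hand-derived formulas:)

import numpy as np

def fit_similarity_lstsq(x, y, u, v):
    # build the (2N x 4) stacked system from above, solve in the least-squares sense
    N = len(x)
    ones, zeros = np.ones(N), np.zeros(N)
    A = np.empty((2 * N, 4))
    A[0::2] = np.column_stack([x, -y, ones, zeros])  # rows for u_i
    A[1::2] = np.column_stack([y, x, zeros, ones])   # rows for v_i
    rhs = np.empty(2 * N)
    rhs[0::2], rhs[1::2] = u, v
    (a, b, s, t), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return a, b, s, t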


Posted in computer programming | Tagged | Leave a comment

Least Squares Fitting of… (0)

…  Affine Transformations.

Note to myself (0).

1. problem

Let’s say we have 2 sets of 2D points \{(x_i, y_i)\} and \{(u_i, v_i)\}, i = 0, 1,...,n-1, where (x_i, y_i) corresponds to (u_i, v_i) for each i.  What I want is a nice affine transform that maps each (x_i, y_i) towards (u_i, v_i), such that the sum of squared distances between the mapped (x_i, y_i) and (u_i, v_i) is minimised.  That is, an affine transform that minimises this:

\displaystyle E = \sum_{i=0}^{n-1} [(x'_i - u_i)^2 + (y'_i - v_i)^2],

where

\displaystyle \begin{bmatrix} x'_i \\ y'_i \end{bmatrix} = \begin{bmatrix} a & b \\ c & d \end{bmatrix} \begin{bmatrix} x_i \\ y_i \end{bmatrix} + \begin{bmatrix} s \\ t \end{bmatrix}.
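(A minimal sketch of the transform and the error in Python with NumPy; names are mine:)

import numpy as np

def affine_error(a, b, c, d, s, t, x, y, u, v):
    # E: sum of squared distances between mapped (x_i, y_i) and (u_i, v_i)
    xp = a * x + b * y + s  # x'_i
    yp = c * x + d * y + t  # y'_i
    return np.sum((xp - u) ** 2 + (yp - v) ** 2)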

2. go calculus

In order for E to take its minimum value, it is necessary that E is stationary, i.e. all 1st partial derivatives of E with respect to a, b, c, d, s, t must be 0.  Writing down the necessary conditions, I get:

\displaystyle \dfrac{\partial E}{\partial a} = \dfrac{\partial E}{\partial b} = \dfrac{\partial E}{\partial c} = \dfrac{\partial E}{\partial d} = \dfrac{\partial E}{\partial s} = \dfrac{\partial E}{\partial t} = 0.

Computing the partial derivatives and simplifying (a bit tedious) leads to 2 separate systems of linear equations:

\displaystyle \begin{bmatrix} \sum x_i x_i & \sum x_i y_i & \sum x_i \\ \sum x_i y_i & \sum y_i y_i & \sum y_i \\ \sum x_i & \sum y_i & n \end{bmatrix} \begin{bmatrix} a \\ b \\ s \end{bmatrix} = \begin{bmatrix} \sum x_i u_i \\ \sum y_i u_i \\ \sum u_i \end{bmatrix},

\displaystyle \begin{bmatrix} \sum x_i x_i & \sum x_i y_i & \sum x_i \\ \sum x_i y_i & \sum y_i y_i & \sum y_i \\ \sum x_i & \sum y_i & n \end{bmatrix} \begin{bmatrix} c \\ d \\ t \end{bmatrix} = \begin{bmatrix} \sum x_i v_i \\ \sum y_i v_i \\ \sum v_i \end{bmatrix},

(sums are over i = 0 to n-1).

Solving for a, b, c, d, s, t gives me what I wanted, the “nice” affine transform.
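A minimal sketch of this section in Python with NumPy (the function name is mine): the two 3x3 systems share the same matrix, so build it once and solve twice.

import numpy as np

def fit_affine_calculus(x, y, u, v):
    # normal-equation matrix, common to both systems
    n = len(x)
    M = np.array([[(x * x).sum(), (x * y).sum(), x.sum()],
                  [(x * y).sum(), (y * y).sum(), y.sum()],
                  [x.sum(), y.sum(), n]])
    a, b, s = np.linalg.solve(M, [(x * u).sum(), (y * u).sum(), u.sum()])
    c, d, t = np.linalg.solve(M, [(x * v).sum(), (y * v).sum(), v.sum()])
    return a, b, c, d, s, t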

3. go linear algebra

Let’s imagine an ideal situation where the mapped points \{(x'_i, y'_i)\} coincide with \{(u_i, v_i)\}, i.e.

\displaystyle \begin{bmatrix} ax_i + by_i + s \\ cx_i + dy_i + t \end{bmatrix} = \begin{bmatrix} u_i \\ v_i \end{bmatrix}

for each i.  This can be rewritten as 2 systems of equations as follows:

\displaystyle \begin{bmatrix} x_0 & y_0 & 1 \\ x_1 & y_1 & 1 \\ ... & ... & ... \\ x_{n-1} & y_{n-1} & 1 \end{bmatrix} \begin{bmatrix} a \\ b \\ s \end{bmatrix} = \begin{bmatrix} u_0 \\ u_1 \\ ... \\ u_{n-1} \end{bmatrix},

\displaystyle \begin{bmatrix} x_0 & y_0 & 1 \\ x_1 & y_1 & 1 \\ ... & ... & ... \\ x_{n-1} & y_{n-1} & 1 \end{bmatrix} \begin{bmatrix} c \\ d \\ t \end{bmatrix} = \begin{bmatrix} v_0 \\ v_1 \\ ... \\ v_{n-1} \end{bmatrix}.

Outside this ideal situation, the equations above do not hold exactly; I cannot simply equate LHS with RHS.

But the systems give me a clue for looking at the problem from a different viewpoint.  Namely, the LHS can be seen as a linear combination of 3 column vectors, which span a 3-dimensional subspace of n-dimensional space.  The RHS is a vector that also resides in the same n-dimensional space, but in general it does not lie in the 3-dimensional subspace spanned by the LHS.

Within that 3-dimensional subspace, the closest possible point to the RHS is given by the orthogonal projection of the RHS vector onto the subspace.  The residual, the line from the RHS vector to its projection, must then be orthogonal to the 3 column vectors of the LHS matrix.  Putting this condition into math expressions, I get:

\displaystyle \begin{bmatrix} x_0 & x_1 & ... & x_{n-1} \\ y_0 & y_1 & ... & y_{n-1} \\ 1 & 1 & ... & 1 \end{bmatrix} \left(\begin{bmatrix} x_0 & y_0 & 1 \\ x_1 & y_1 & 1 \\ ... & ... & ... \\ x_{n-1} & y_{n-1} & 1 \end{bmatrix} \begin{bmatrix} a \\ b \\ s \end{bmatrix} - \begin{bmatrix} u_0 \\ u_1 \\ ... \\ u_{n-1} \end{bmatrix} \right) = \boldsymbol{0},

\displaystyle \begin{bmatrix} x_0 & x_1 & ... & x_{n-1} \\ y_0 & y_1 & ... & y_{n-1} \\ 1 & 1 & ... & 1 \end{bmatrix} \left(\begin{bmatrix} x_0 & y_0 & 1 \\ x_1 & y_1 & 1 \\ ... & ... & ... \\ x_{n-1} & y_{n-1} & 1 \end{bmatrix} \begin{bmatrix} c \\ d \\ t \end{bmatrix} - \begin{bmatrix} v_0 \\ v_1 \\ ... \\ v_{n-1} \end{bmatrix} \right) = \boldsymbol{0}.

These can be rewritten as

\displaystyle \begin{bmatrix} x_0 & x_1 & ... & x_{n-1} \\ y_0 & y_1 & ... & y_{n-1} \\ 1 & 1 & ... & 1 \end{bmatrix} \begin{bmatrix} x_0 & y_0 & 1 \\ x_1 & y_1 & 1 \\ ... & ... & ... \\ x_{n-1} & y_{n-1} & 1 \end{bmatrix} \begin{bmatrix} a \\ b \\ s \end{bmatrix} = \begin{bmatrix} x_0 & x_1 & ... & x_{n-1} \\ y_0 & y_1 & ... & y_{n-1} \\ 1 & 1 & ... & 1 \end{bmatrix} \begin{bmatrix} u_0 \\ u_1 \\ ... \\ u_{n-1} \end{bmatrix},

\displaystyle \begin{bmatrix} x_0 & x_1 & ... & x_{n-1} \\ y_0 & y_1 & ... & y_{n-1} \\ 1 & 1 & ... & 1 \end{bmatrix} \begin{bmatrix} x_0 & y_0 & 1 \\ x_1 & y_1 & 1 \\ ... & ... & ... \\ x_{n-1} & y_{n-1} & 1 \end{bmatrix} \begin{bmatrix} c \\ d \\ t \end{bmatrix} = \begin{bmatrix} x_0 & x_1 & ... & x_{n-1} \\ y_0 & y_1 & ... & y_{n-1} \\ 1 & 1 & ... & 1 \end{bmatrix} \begin{bmatrix} v_0 \\ v_1 \\ ... \\ v_{n-1} \end{bmatrix}.

Computing matrix products, these equations result in the ones given in section 2.

As a computer programmer, a nice thing about this approach is that I can do away with some hand computation, which is a bit error prone, provided that I have basic matrix operators and solvers at hand.  Furthermore, with a Householder transformer at hand, I can even dispose of the matrix multiplier, which is actually my favourite way to go.
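Sketching that in NumPy (np.linalg.lstsq stands in for the Householder-based solver here; internally it is SVD-based, but like a QR approach it works on the rectangular system directly, with no matrix products formed by me):

import numpy as np

def fit_affine_lstsq(x, y, u, v):
    # n x 3 design matrix with rows (x_i, y_i, 1)
    A = np.column_stack([x, y, np.ones(len(x))])
    a, b, s = np.linalg.lstsq(A, u, rcond=None)[0]
    c, d, t = np.linalg.lstsq(A, v, rcond=None)[0]
    return a, b, c, d, s, t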


Posted in computer programming | Tagged | 1 Comment