

Computation of generalized inverses using PHP/MySQL environment

Milan B. Tasić∗, Predrag S. Stanimirović, Selver H. Pepić
University of Niš, Faculty of Science and Mathematics, Višegradska 33, 18000 Niš, Serbia
E-mail: [email protected], [email protected],

[email protected]

Abstract. The main aim of this paper is to develop a client/server based model for computing the weighted Moore-Penrose inverse using the partitioning method, as well as for storage of the generated results. The web application is developed in the PHP/MySQL environment. The source code is open and free for testing by using a Web browser. The influence of different matrix representations and storage systems on the computational time is investigated. The CPU time for searching previously stored pseudo-inverses is compared with the CPU time spent for new computation of the same inverses. AMS Subj. Class.: 15A09, 68P15, 68N15. Key words: PHP script, MySQL database, weighted Moore-Penrose inverse, matrix storage formats.

1 Introduction

Since the mid-1990s, there has been a surge of interest among academics and practitioners in open source software (OSS). There are many successful projects in the OSS community, primarily the Mozilla web browser, the Linux operating system, the Apache web server and, to a lesser extent, PHP [12] and the Perl programming languages, as well as the MySQL database. OSS has drawn the attention of users and developers because of its economic benefits [10]. PHP is open source software, and it is free to download and use. The main characteristics of PHP are described in [12]. MySQL has become the world's most popular open source database system because of its consistently fast performance, high reliability and ease of use. Besides the fact that it is free, MySQL offers a wide range of possibilities [5, 27]. SQL is the standard language used for querying and analysis of data in a relational DBMS [4]. Unfortunately, SQL has no vector and matrix computations. Interesting code for matrix SQL operations can be found in [16]. SQL constructions and SQL primitives for data mining are proposed in [1]. However, these constructions do not offer adequate and flexible tools for matrix manipulation. A graphical SQL query generator and query operators with embedded matrix objects are disclosed in [3]. Implementation of some vector and matrix operations based on programming User-Defined Functions (UDFs) is studied in [15]. UDFs represent a C programming interface that allows the definition of scalar and aggregate functions that can be used in SQL. UDFs have several advantages and limitations.

∗ Corresponding author

A UDF allows fast evaluation of arithmetic expressions, memory manipulation, the use of multidimensional arrays and exploitation of all C language control statements. The organization of the paper is as follows. The motivation of the paper is described in the second section. In the third section, we describe application details. In the fourth section, we develop a data storage and computing system, based on the PHP/MySQL environment, which manipulates different types of matrices. Functions and procedures are hardwired to a common matrix table. The influence of different matrix representations, in conjunction with storage systems, on the performance of the implemented software is examined through numerical experiments. All test matrices and results can be stored in files and in the database on the server side. A few illustrative examples and comparative studies are presented in the last section.

2 Preliminaries and motivation

Some mathematical formulations and motivation are given in this section. Let C be the set of complex numbers, C^{m×n} the set of m × n complex matrices, and C_r^{m×n} = {X ∈ C^{m×n} : rank(X) = r}. For any matrix A ∈ C^{m×n} and positive definite matrices M and N of orders m and n respectively, consider the following equations in X, where ∗ denotes conjugate transpose:

(1) AXA = A        (2) XAX = X
(3M) (MAX)∗ = MAX        (4N) (NXA)∗ = NXA.

The matrix X satisfying (1), (2), (3M) and (4N) is called the weighted Moore-Penrose inverse of A, and it is denoted by X = A†M,N. In the particular case M = Im and N = In, the matrix X = A†M,N reduces to the Moore-Penrose inverse of A, denoted by X = A†. Greville's partitioning method for numerical computation of generalized inverses is introduced in [6]. The following computational experience is reported in [11]: "when applied to a square, fully populated, non-symmetric case, with independent columns, the Greville's algorithm found that the approach can be up to eight times faster than the conventional approach of using the SVD; rectangular cases are shown to yield similar levels of a speed increase". Due to its computational dominance, this method has been extensively applied in many mathematical areas, such as statistical inference, filtering theory, linear estimation theory, optimization and, more recently, analytical dynamics [25] (see also [9]). An application of the partitioning method in a direct approach for computing the gradient of the pseudoinverse is presented in [11]. It has also found wide applications in database and neural network computation [13]. In the paper [7], the sequential determination of the Moore-Penrose generalized inverse matrix by dynamic programming is applied to the diagnostic classification of electromyography signals. Greville's partitioning algorithm has been extended in many papers. Wang in [26] generalizes Greville's method to the weighted Moore-Penrose inverse. An algorithm for computing the Moore-Penrose inverse of one-variable polynomial and/or rational matrices, based on Greville's partitioning algorithm, is established in [19]. An extension of the results from [19] to the set of two-variable rational and polynomial matrices is introduced in the paper [18]. In the paper [22] we extended Wang's partitioning method from [26] to the set of one-variable rational and polynomial matrices.
An efficient implementation of the algorithm introduced in [22] is established in [17]. An implementation of the algorithm introduced in [21] for computing generalized inverses is based on the LU factorization of the matrix product. For the sake of completeness, we restate the algorithm introduced in [26], applicable to rational and constant matrices. The algorithm is quite appropriate for computation of the weighted Moore-Penrose inverse as well as the Moore-Penrose inverse and regular inverse.
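Equations (1), (2), (3M) and (4N) are easy to verify numerically. The following NumPy sketch (illustrative only, not part of the application code) checks the four conditions in the unweighted case M = Im, N = In, where A†M,N reduces to A†:

```python
import numpy as np

def penrose_conditions_hold(A, X, tol=1e-9):
    """Check equations (1), (2), (3M), (4N) with M = I and N = I."""
    c1 = np.allclose(A @ X @ A, A, atol=tol)             # (1)  AXA = A
    c2 = np.allclose(X @ A @ X, X, atol=tol)             # (2)  XAX = X
    c3 = np.allclose((A @ X).conj().T, A @ X, atol=tol)  # (3M) (MAX)* = MAX
    c4 = np.allclose((X @ A).conj().T, X @ A, atol=tol)  # (4N) (NXA)* = NXA
    return c1 and c2 and c3 and c4

A = np.array([[1., 2., 3.],
              [4., 5., 6.]])          # a 2 x 3 matrix of rank 2
X = np.linalg.pinv(A)                 # Moore-Penrose inverse via SVD
print(penrose_conditions_hold(A, X))  # True
```

For general positive definite M and N the conjugate transposes in (3M) and (4N) would be taken of MAX and NXA instead, as in the definition above.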


Algorithm 2.1 (G.R. Wang, Y.L. Chen) Computing the weighted M-P inverse A†M,N.
Require: A ∈ R^{m×n}; M and N p.d. matrices of order m and n, respectively.
1: A1 = a1.
2: if a1 ≠ 0 then
3:    X1 = (a1∗ M a1)^{−1} a1∗ M;
4: else
5:    X1 = 0.
6: end if
7: for k = 2 to n do
8:    dk = Xk−1 ak, ck = ak − Ak−1 dk,
9:    if ck ≠ 0 then
10:      bk∗ = (ck∗ M ck)^{−1} ck∗ M, goto Step 15,
11:   else
12:      δk = nkk + dk∗ Nk−1 dk − (dk∗ lk + lk∗ dk) − lk∗ (I − Xk−1 Ak−1) Nk−1^{−1} lk,
13:      bk∗ = δk^{−1} (dk∗ Nk−1 − lk∗) Xk−1,
14:   end if
15:   Xk = [ Xk−1 − (dk + (I − Xk−1 Ak−1) Nk−1^{−1} lk) bk∗
             bk∗ ]
16: end for
17: return A†M,N = Xn.

Comparison of searching large databases with recomputation in recursive algorithms is one of the questions on which we focus in this article. The partitioning method is chosen because of its recursive structure. Substantial effort is dedicated to a tight coupling of database and matrix computations, particularly in the inversion problem. Our idea is to memorize all input and resulting matrices in database tables, and when we need some of them, simply to find the results in the appropriate table. Obviously, in the case when the input matrices and requested operations are stored on the server side, it is more efficient to find the results on the server than to perform the recomputation. This approach skips the computational logic and reduces the CPU time. We discuss the implementation of Algorithm 2.1 on a numerical DBMS and present experimental results demonstrating the performance benefit. Many scientists ignore the fact that most of the data subject to analysis has already been processed and can be stored in database systems, and that database systems provide powerful mechanisms for accessing, filtering and indexing data. We develop an application which memorizes all input matrices and the results of performed matrix computations.
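For readers who want to experiment outside the PHP/MySQL application, the recursion of Algorithm 2.1 can be sketched in a few lines of NumPy for the unweighted special case M = Im, N = In (so lk = 0 and δk = 1 + dk∗ dk, i.e. Greville's original method). This sketch is illustrative only and is not part of the application code:

```python
import numpy as np

def greville_pinv(A, tol=1e-12):
    """Column-by-column partitioning method: Algorithm 2.1 with M = I, N = I,
    in which case l_k = 0 and delta_k = 1 + d_k* d_k (Greville's recursion)."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    a1 = A[:, :1]
    if np.linalg.norm(a1) <= tol:          # Step 5: a_1 = 0  ->  X_1 = 0
        X = np.zeros((1, m))
    else:                                  # Step 3 with M = I
        X = a1.T / (a1.T @ a1)
    for k in range(1, n):
        ak  = A[:, k:k+1]
        Ak1 = A[:, :k]
        d = X @ ak                         # d_k = X_{k-1} a_k
        c = ak - Ak1 @ d                   # c_k = a_k - A_{k-1} d_k
        if np.linalg.norm(c) > tol:        # Step 10: b_k* = (c_k* c_k)^{-1} c_k*
            b = c.T / (c.T @ c)
        else:                              # Step 13: b_k* = delta_k^{-1} d_k* X_{k-1}
            b = (d.T @ X) / (1.0 + d.T @ d)
        X = np.vstack([X - d @ b, b])      # Step 15 block update
    return X

A = np.array([[1., 0.], [0., 1.], [1., 1.]])
print(np.allclose(greville_pinv(A), np.linalg.pinv(A)))  # True
```

The same recursion structure carries over to the weighted case; the PHP implementation in Procedure 4.8 below follows the full Algorithm 2.1 with general M and N.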
The main challenge is, first, to identify appropriate matrix representations and, second, to implement them using numerical DBMS extension facilities, e.g. storing, indexing, searching, etc. To achieve our aim, we implemented and tested different methodologies for storing matrices in the database and different systems of matrix representation (application logic) in the three-tier architecture. We develop an algorithm for storing every input matrix and the results of computations in a database system. The algorithm is applied in order to develop the client/server Web application for computing the Moore-Penrose inverse, the weighted Moore-Penrose inverse, as well as the fundamental matrix operations. Based on the analysis of available algorithms and our previous work in this area (see [17, 18, 19, 20, 22, 21, 23, 24]), we have identified operations which are useful for a number of similar algorithms. The following figure illustrates our idea. We compare the input matrices (entered in the web forms) and the required matrix operations with the matrices and operations in the database table. If these matrices and required operations exist in a database table, the computational logic is determined by the steps (1, 2, 3, 4, 5). Otherwise, the computational logic follows the path (1, 2, 3, 6, 7, 5), as illustrated in Figure 1.


Figure 1: A computing machine on the middle tier

An extension of DBMS with fundamental vector and matrix operators, supported by programming in the PHP/MySQL environment, is studied. We experimentally compare the influence of different matrix representations and storage systems on the computation of the weighted Moore-Penrose inverse using client-server architecture and SQL, with respect to performance, ease of use, flexibility and scalability. The PHP scripting language is used as our middle-tier scripting language in the three-tier architecture model of the web database application. The research questions we should answer in this article are the following:
- Can the PHP/MySQL environment help in writing common matrix operations aimed at computing generalized inverses?
- Can we take advantage of the PHP language to implement vector and matrix operations in a database?
- How do different matrix representations, implemented using client-server architecture and SQL, affect the performance, ease of use, flexibility and scalability of algorithms for computing generalized inverses?
In the present article, we continue the idea used in [20], where matrix database storage is used in the pseudo-inverse computation. Besides the routines which compute the weighted Moore-Penrose inverse, we develop a set of routines implementing a matrix library. In this way, we also continue the results from [8], where a dynamic parallel matrix library is introduced. Our fundamental matrix operations include addition, subtraction and multiplication of two matrices, computation of determinants, as well as pseudo-inverse and weighted pseudo-inverse computations. All these operations are considered for both dense and sparse matrices, with a possibility of expansion to diagonal, triangular and symmetric sparse matrices.
We also extend the results from the paper [14], where a set of FORTRAN subroutines for testing computer programs that compute the generalized inverse is presented. Usage of the programming language PHP overcomes the bounds of the FORTRAN and C subroutines from [14] and [8] through the following possibilities:
1. Usage of the Web-oriented programming paradigm available in PHP in conjunction with HTML and XML;
2. Usage of object-oriented programming (OOP);
3. Possibility to use the data storage system available in the MySQL database system;
4. Usage of the Apache Web server.
Therefore, the main idea is to provide an appropriate client-server application in the free open source PHP/MySQL development environment, utilizing the minimum of resources: an Internet browser and the operating system.

3 Application details

Standard SQL and SQL servers do not support direct operations on matrices. However, because tables and matrices share similar structures, SQL allows easy manipulation of matrices. The present article demonstrates a few SQL techniques for performing basic matrix operations. As we have mentioned, our idea is to implement a client/server application supported by database usage, in order to find a solution which is already placed in the database. Our application makes possible operations with one, two and/or three arguments. The application is a combination of PHP/MySQL elements and implements special database operations, which support the SQL software implementation of a wide range of algorithms. Figure 2 shows how the application works.

Figure 2: The matrix computation system

Each input matrix is first transformed into a string, with the elements separated by commas. Then the generated strings, together with the required operations, are compared with the data stored in the database. If the search succeeds, recomputation is not needed and the result is displayed immediately in an appropriate web form. The possibilities offered by the application are the following:
- unary operations: matrix operations where only a single matrix and an optional coefficient are used to produce a unique result (A−1, rA, A†, ...);
- binary operations: matrix operations involving two matrices (A + B, Ap × Bq, ...);
- ternary operations: operations involving three matrices, such as the computation of the weighted Moore-Penrose inverse A†M,N.
The user can define input matrices in two ways:
- by entering the values of the matrix elements one by one;
- by uploading a txt file which contains the matrix entries, under the assumption that matrix rows correspond to lines of the file and matrix elements are separated by blank characters.
A suitable Web interface, illustrated in Figure 3, has been developed for this approach.
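The lookup-before-compute logic of Figure 2 can be sketched independently of MySQL. The following Python sketch replaces the matrices_out table with a hypothetical in-memory dictionary; serialize, cached_op and _results_cache are illustrative names, not identifiers from the application:

```python
import numpy as np

# Stand-in for the matrices_out table: maps (operation, serialized operands)
# to the computed result.  In the application this lookup is a SELECT query.
_results_cache = {}

def serialize(A):
    """R format: all entries in one comma-separated string (row-major)."""
    return ",".join(str(x) for x in np.asarray(A).ravel())

def cached_op(name, func, *matrices):
    key = (name,) + tuple(serialize(M) for M in matrices)
    if key in _results_cache:              # solution already in the "table"
        return _results_cache[key], True   # True: fetched, no recomputation
    result = func(*matrices)               # computational logic (path 6, 7)
    _results_cache[key] = result           # store for future requests
    return result, False

A = np.array([[1., 2.], [3., 4.]])
r1, hit1 = cached_op("inverse", np.linalg.inv, A)  # computed and stored
r2, hit2 = cached_op("inverse", np.linalg.inv, A)  # fetched from the cache
print(hit1, hit2)  # False True
```

The second request skips the computational logic entirely, which is exactly the saving the application obtains by storing results in the matrices_out table.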


Figure 3: Web interface of the application

How does the application work? The general algorithm is based on the following three steps.
The first step. The user chooses, via a web browser, the matrix operation that will be performed.
The second step. Select the type of matrix to be processed and define its entries. There are three types of matrices for processing: dense, sparse or test matrices. Moreover, it is possible to define the entries of the input matrix in one of the following three ways, as illustrated in Figure 4:
- loading the matrix from an existing text file, using the button Browse; the matrix is stored in memory as a txt file, assuming that the elements of the matrix are separated by commas and the matrix rows are separated by new lines;
- entering the number of rows and columns and then the values of the elements;
- entering the complete matrix in the text area box, assuming that its entries are separated by blank characters.
The third step. Generate and preview the solution.


Figure 4: An example of how to input matrix elements

All previously handled matrices, chosen operations and generated results are stored on the server, in an appropriate form, in txt files. In this way it is possible to avoid recomputation and to download solutions of previously performed operations. Users are also allowed to use ready-made test samples.

4 The Storage system

The matrices and their storage representations implemented in this paper can be classified into the following two groups. Randomly generated and test matrices are stored in the database in two different storage systems. The row matrix format (R format) places all matrix elements in a vector, so that the matrix is represented as a string containing the values of the matrix elements separated by commas. On the other hand, the mR format represents a matrix in a database table under the assumption that the number of records in the table is equal to the number of rows of the matrix. Sparse unstructured matrices are matrices most of whose entries are zero and whose nonzero entries follow no specific pattern. There are various storage schemes which minimize the memory space and computational requirements by storing and performing arithmetic with only the nonzero elements. The simplest sparse matrix storage structure


is the Coordinate Format (COO format) [2], where the matrix is stored in three vectors which represent the underlying sparse structure. The first vector (resp. the second vector) stores the row indices (resp. the column indices) of all non-zero entries. The third vector stores the non-zero entries of the sparse matrix. The SQL code for creating the data structures and tables on the MySQL database server is described as follows. The CREATE TABLE statement has three parts:
- A table name follows the CREATE TABLE keywords.
- A list containing attribute names, types and modifiers is placed between the parentheses.
- A key list follows the attribute list, also between the parentheses.
In our implementation, we use only two different table names, namely matrices_in and matrices_out:

# Table structure for table `matrices_in`
CREATE TABLE matrices_in (
  id_in int(11) NOT NULL auto_increment,
  elements_in longtext NOT NULL,
  dimension tinytext NOT NULL,
  test tinytext NOT NULL,
  sparse tinytext NOT NULL,
  PRIMARY KEY (id_in)
) TYPE=MyISAM;

# Table structure for table `matrices_out`
CREATE TABLE matrices_out (
  id_out int(11) NOT NULL auto_increment,
  elements_out longtext NOT NULL,
  operation tinytext NOT NULL,
  matrix_I int(11) NOT NULL default '0',
  matrix_II int(11) NOT NULL default '0',
  matrix_III int(11) NOT NULL default '0',
  r tinyint(4) NOT NULL default '0',
  s tinyint(4) NOT NULL default '0',
  p tinyint(4) NOT NULL default '0',
  q tinyint(4) NOT NULL default '0',
  PRIMARY KEY (id_out)
) TYPE=MyISAM;
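The COO scheme can be sketched in a few lines. The following Python functions (to_coo and from_coo are illustrative names, not part of the application code) build the three vectors and reconstruct the dense matrix from them:

```python
import numpy as np

def to_coo(A, tol=0.0):
    """COO format: three parallel vectors (row indices, column indices, values)."""
    rows, cols, vals = [], [], []
    for i, row in enumerate(np.asarray(A)):
        for j, v in enumerate(row):
            if abs(v) > tol:
                rows.append(i); cols.append(j); vals.append(v)
    return rows, cols, vals

def from_coo(rows, cols, vals, shape):
    """Rebuild the dense matrix from its three COO vectors."""
    A = np.zeros(shape)
    for i, j, v in zip(rows, cols, vals):
        A[i, j] = v
    return A

C = np.zeros((4, 4))
C[0, 0], C[1, 2], C[3, 3] = 1.0, 4.0, 2.0
rows, cols, vals = to_coo(C)
print(rows, cols, vals)  # [0, 1, 3] [0, 2, 3] [1.0, 4.0, 2.0]
assert np.array_equal(from_coo(rows, cols, vals, C.shape), C)
```

In the application, each of the three vectors is stored as one comma-separated elements_in string, which is how the sparse matrix C of Example 4.1 occupies three rows of the table matrices_in.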

The fields used in the database tables are described as follows.
- id_in or id_out: identification number, defined as auto_increment;
- elements_in or elements_out: string of type longtext, containing the matrix entries separated by the comma character;
- dimension: string of the form 'm x n' which contains the dimensions m and n of the matrix;
- test: string representing the name of a test matrix from [28], or the empty string (if the input matrix is not a test matrix);
- sparse: flag of type tinytext, possessing the value '0' if the matrix is non-sparse or one of the values '1', '2', '3' for a sparse matrix; this field is important for searching the matrices already stored in the database;
- operation: defines the operation on the input matrices (A + B, rA + sB, A−1 + B−1, ...) or the computation of the weighted Moore-Penrose inverse;
- matrix_I, matrix_II and matrix_III: contain IDs from the table matrices_in, or the default value '0', used to find a solution for the chosen operation and entered matrices;
- r, s, p and q: optional coefficients used to determine the matrix operations rA, sB, Ap and Bq, respectively.
It is clear that the dimensions of the matrices and vectors used as arguments are limited by the maximal length of the type longtext, which is within the interval [0, 2^32 − 1].
Example 4.1. In the present example, matrix A defines the test matrix A from [28] in the particular case a = 1, matrix B is randomly generated, and matrix C is sparse and unstructured.

matrix A =
  11 10  9  8  7  6  5  4  3  2
  10 10  9  8  7  6  5  4  3  2
   9  9  9  8  7  6  5  4  3  2
   8  8  8  8  7  6  5  4  3  2
   7  7  7  7  7  6  5  4  3  2
   6  6  6  6  6  6  5  4  3  2
   5  5  5  5  5  5  5  4  3  2
   4  4  4  4  4  4  4  4  3  2
   3  3  3  3  3  3  3  3  2  1
   2  2  2  2  2  2  2  2  1  0
   1  1  1  1  1  1  1  1  0 -1        (11 x 10)

matrix B =
  282  -11 -206  -39   84   94
  -11  241  -80  129  121  -86
 -206  -80  306    4 -113    2
  -39  129    4  394  -19 -219
   84  121 -113  -19  119   15
   94  -86    2 -219   15  184        (6 x 6)

matrix C =
  1 2 3 0 0 0 0 0 0 0
  0 2 0 0 0 0 0 0 0 0
  1 0 0 4 0 0 0 0 0 0
  0 0 0 0 0 0 0 0 0 0
  0 4 0 0 0 8 0 0 0 0
  0 0 0 0 0 0 0 0 0 0
  0 0 0 0 0 0 0 0 0 0
  0 0 0 0 0 0 0 0 0 0
  0 0 0 0 0 0 0 0 0 0
  0 0 0 0 0 0 0 0 0 2        (10 x 10)

The corresponding fragment of the table matrices_in is illustrated in Table 1.

id_in | elements_in                    | dimension | test    | sparse
1     | 11,10,9,...,1,1,1,0,-1         | 11 x 10   | A11×10  | 0
2     | 282,-11,-206,...,2,-219,15,184 | 6 x 6     | ''      | 0
3     | 0,0,0,1,2,2,4,4,9              | 10 x 10   | ''      | 1
4     | 0,1,2,1,0,3,1,5,9              | 10 x 10   | ''      | 2
5     | 1,2,3,2,1,4,4,8,2              | 10 x 10   | ''      | 3

Table 1. Matrices in the database table matrices_in

Three different matrices are stored in Table 1. The matrix A, which represents the test matrix A11×10 from [28], occupies the first row of the table. The representation of the matrix B is placed in the second row. The third, fourth and fifth rows contain the sparse representation of the matrix C in the COO format. The corresponding fragment of the table matrices_out is illustrated by Table 2, where A(−1) and A(MN) denote the inverse and the weighted Moore-Penrose inverse, respectively.

id_out | elements_out                | operation | matrix_I | matrix_II | matrix_III | r | s | p | q
1      | 1,-1,0,...,-0.25,-0.417     | A(-1)     | 1        | 0         | 0          | 0 | 0 | 0 | 0
2      | 1974,-77,...,105,1288       | r*A+s*B   | 2        | 2         | 0          | 3 | 4 | 0 | 0
3      | 0,0,0,0,1,2,2,2,4,9         | A*B       | 3        | 3         | 0          | 0 | 0 | 0 | 0
4      | 0,1,2,3,1,0,1,2,1,9         | A*B       | 4        | 4         | 0          | 0 | 0 | 0 | 0
5      | 4,6,3,12,4,1,2,3,8,4        | A*B       | 5        | 5         | 0          | 0 | 0 | 0 | 0
6      | 0.755,-0.156,...,0.526,-0.2 | A(MN)     | 18       | 14        | 17         | 0 | 0 | 0 | 0

Table 2. Matrices in the database table matrices_out

4.1 Relations between matrix representations and matrix storage systems

We present different methods for computing the pseudo-inverse A†M,N and executing the fundamental matrix operations, depending on the relationship between the implemented storage systems and matrix representations. As we have already mentioned, matrices are stored in the database in three formats: the R format, the mR format and the COO format (the last only for sparse matrices). To accelerate the search for resulting matrices stored in the database, as well as to improve the computation of pseudo-inverses, we exploit the correlations between the presented storage systems indicated in Figure 5.

Figure 5: Relationship between storage systems and matrix representations

We implemented different computational systems for dense as well as for sparse matrices in order to identify the best one. For dense matrices we use the computational system illustrated in Figure 6(a). For sparse matrices we use a more suitable computational system, shown in Figure 6(b).

Figure 6: The computational systems for dense and sparse matrices

UDFs for both dense and sparse matrices are implemented in the 2D array format. UDFs for sparse matrices are adapted to the COO storage system, in contrast to UDFs for dense matrices, which are adapted to the R format.

4.2 Implementation of matrix operations

Implementation of an arbitrary matrix operation in the database tier can be described by the following algorithm.

Algorithm 4.2 Implementation of an arbitrary matrix operation.
Require: Input matrices.
1: Define the input matrices and/or coefficients for processing and select the matrix operation.
2: Form the strings which define the elements of the entered matrices, applying Procedure 4.2 for each input (dense) matrix.
3: Search for the entered matrices in the database, performing Procedure 4.3.
4: if the input matrices and required operation exist in the database then
5:    search for the solution applying Procedure 4.5;
6:    if the solution exists then
7:       goto Step 16;
8:    else
9:       goto Step 14;
10:   end if
11: else
12:   store the entered matrices by applying Procedure 4.4 and goto Step 14.
13: end if
14: Process the input matrices by executing Procedure 4.8.
15: Store the result, performing Procedure 4.7.
16: return the result from the database using Procedure 4.6.

We provide some procedures, applicable to dense matrices, that are used in our implementation.

Procedure 4.1. This procedure defines the input matrix by loading the corresponding txt file. It can be used in Step 1 of Algorithm 4.2.

function upload_file() {
    global $userfile, $userfile_name, $userfile_size,
           $userfile_type, $archive_dir, $WINDIR;
    if (isset($WINDIR))
        $userfile = str_replace("\\\\", "\\", $userfile);
    $filename = basename($userfile_name);
    if ($userfile_size <= 0)
        die("$filename is empty.");
    if (!@copy($userfile, "$archive_dir/$filename"))
        die("Can't copy $userfile_name to $filename.");
    if (!isset($WINDIR) && !@unlink($userfile))
        die("Can't delete the file $userfile_name.");
}

Procedure 4.2. Code used in Step 2 of Algorithm 4.2 for defining the string (vector) which includes the entries of a dense matrix.

function make_vector($file) {
    $m = count($file);                        // number of rows
    $string = "";
    for ($i = 0; $i < $m; $i++) {
        $file[$i] = rtrim($file[$i]) . ",";
        $string = $string . $file[$i];
    }
    $string1 = str_replace(" ", ",", $string);
    $string2 = substr($string1, 0, strlen($string1) - 1);
    return $string2;                          // vector of all elements of the matrix
}

Procedure 4.3. Searching for a dense matrix with randomly chosen elements is implemented by the next function. The search for a test matrix is defined by its unique name.


function if_exist($file) {
    $db_link = db_connect();                  // connect to the database
    $m = m($file);                            // number of rows
    $n = n($file);                            // number of columns
    $dimension = $m . " x " . $n;
    $string2 = make_vector($file);
    $query = "select * from MATRICES_IN where DIMENSION='$dimension'";
    $result = mysql_query($query, $db_link);
    $sum = 0;
    while ($row = mysql_fetch_row($result)) {
        if ($row[1] == $string2) { $sum++; break; }
    }
    if ($sum == 1) { $id_matrix = $row[0]; }  // the matrix exists
    else $id_matrix = false;                  // the matrix does not exist
    return $id_matrix;
}

Procedure 4.4. If the input matrix does not exist in the database, we store it by applying the next code.

function enter_matrix($string2, $dimension, $test) {
    $db_link = db_connect();
    $query = "insert into MATRICES_IN (ELEMENTS_IN, DIMENSION, TEST)
              values ('$string2', '$dimension', '$test')";
    $result = mysql_query($query, $db_link);
}

Procedure 4.5. If the entered matrices and operation already exist, we retrieve the result from the database.

function result_exist($operation, $matrix1, $matrix2, $matrix3, $r, $s, $p, $q) {
    $db_link = db_connect();
    $query = "select * from MATRICES_OUT where OPERATION='$operation'
              and MATRIX_I='$matrix1' and MATRIX_II='$matrix2'
              and MATRIX_III='$matrix3' and R='$r' and S='$s'
              and P='$p' and Q='$q'";
    $result = mysql_query($query, $db_link);
    $row = mysql_fetch_array($result);
    if ($row[1] == NULL) { $matrix_out = false; }
    else { $matrix_out = explode(",", $row["ELEMENTS_OUT"]); }
    return $matrix_out;
}

Procedure 4.6. If the result exists in the database table, we display it.

function presented_result($result, $m, $n) {
    for ($i = 0, $z = 0; $i < $m; $i++) {
        for ($j = 0; $j < $n; $j++, $z++) {
            $value = round($result[$z], 3);
            echo "<input name=elements[] type=text size=\"4\" maxlength=\"5\"
                  value=\"$value\" class=\"style21\" readonly=\"readonly\"> ";
        }
        echo "<br>";
    }
}

Procedure 4.7. The next code is aimed at storing the result of the matrix manipulation.

function enter_solution($result, $operation, $id_m1, $id_m2, $id_m3, $r, $s, $p, $q) {
    $db_link = db_connect();
    $string = implode(",", $result);
    $query = "insert into MATRICES_OUT (ELEMENTS_OUT, OPERATION, MATRIX_I,
              MATRIX_II, MATRIX_III, R, S, P, Q)
              values ('$string', '$operation', '$id_m1', '$id_m2', '$id_m3',
              '$r', '$s', '$p', '$q')";
    $result = mysql_query($query, $db_link);
}

4.3 The pseudo-inverse computation

The PHP code which implements the weighted Moore-Penrose inverse A†M,N according to Algorithm 2.1 is given below. Branching in the code is defined according to the storage system selected for our database table.


Procedure 4.8. The main function for the computation of the weighted Moore-Penrose inverse.

function WeightedInverse($ArrayDataMatrixM, $ArrayDataMatrixA, $ArrayDataMatrixN) {
    $rows = count($ArrayDataMatrixA);
    $columns = count($ArrayDataMatrixA[0]);
    // Initial step: the weighted inverse of the first column a1.
    $aa = ithCol(1, $ArrayDataMatrixA);
    if (isZero($aa) == 0)
        $ar = Transpose($aa);   // zero column: its inverse is the zero row
    else {
        $ta  = Transpose($aa);
        $alb = MultiplicationMatrices($ta, $ArrayDataMatrixM);
        $ali = MultiplicationMatrices($alb, $aa);
        $inv = Inverse($ali);
        $ar  = MultiplicationMatrices($inv, $alb);
    }
    // Recursive step: extend the inverse column by column.
    for ($i = 2; $i <= $columns; $i++) {
        $ii = ithCol($i, $ArrayDataMatrixA);
        $di = MultiplicationMatrices($ar, $ii);
        $fc = frstiCol($i - 1, $ArrayDataMatrixA);
        $mm = MultiplicationMatrices($fc, $di);
        $ci = SubsMatrices($ii, $mm);
        $nim1  = mMinus($ArrayDataMatrixN, $i);
        $nim1g = Inverse($nim1);
        $li    = mColumn($ArrayDataMatrixN, $i);
        if (isZero($ci) == 0) {
            // ci = 0: use the weighted formula for b_i.
            $nii  = mskalar($ArrayDataMatrixN, $i);
            $dit  = Transpose($di);
            $dtn  = MultiplicationMatrices($dit, $nim1);
            $dtnd = MultiplicationMatrices($dtn, $di);
            $dtnd = matNum($dtnd);
            $ditl  = MultiplicationMatrices($dit, $li);
            $lit   = Transpose($li);
            $litdi = MultiplicationMatrices($lit, $di);
            $ditdi = SumaMatrices($ditl, $litdi);
            $ditdi = matNum($ditdi);
            $nim1li = MultiplicationMatrices($nim1g, $li);
            $litnli = MultiplicationMatrices($lit, $nim1li);
            $litnli = matNum($litnli);
            $lktar  = MultiplicationMatrices($lit, $ar);
            $arnklk = MultiplicationMatrices($fc, $nim1li);
            $novo   = MultiplicationMatrices($lktar, $arnklk);
            $novo   = matNum($novo);
            $si    = $nii + $dtnd - $ditdi - $litnli + $novo;
            $dtnar = MultiplicationMatrices($dtn, $ar);
            $dilit = SubsMatrices($dtnar, $lktar);
            $bit   = ScalarMatrix($dilit, 1 / $si);
        } else {
            // ci != 0: b_i is obtained from ci and the weight matrix M.
            $cit  = Transpose($ci);
            $citm = MultiplicationMatrices($cit, $ArrayDataMatrixM);
            $pom  = MultiplicationMatrices($citm, $ci);
            $pom  = matNum($pom);
            $bit  = ScalarMatrix($citm, 1 / $pom);
        }
        // Update the current inverse and append the new row b_i.
        $nim1li = MultiplicationMatrices($nim1g, $li);
        $arai   = MultiplicationMatrices($ar, $fc);
        $arnim1 = MultiplicationMatrices($arai, $nim1li);
        $pi   = SubsMatrices($nim1li, $arnim1);
        $k1   = MultiplicationMatrices($di, $bit);
        $ark1 = SubsMatrices($ar, $k1);
        $pib  = MultiplicationMatrices($pi, $bit);
        $ar   = SubsMatrices($ark1, $pib);
        $ar   = appRow($ar, $bit);
    }
    return $ar;
}
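Whether a computed matrix X really is the weighted Moore-Penrose inverse of A can be checked directly against its four defining equations. The following self-contained sketch does that; the helper names (matMul, matT, matEq, isWeightedMPInverse) are ours, introduced only for this illustration, and are not part of the application:

```php
<?php
// Checks the defining equations of the weighted Moore-Penrose inverse:
// (1) A X A = A, (2) X A X = X, (3) (M A X)^T = M A X, (4) (N X A)^T = N X A.
function matMul($A, $B) {
    $m = count($A); $k = count($B); $n = count($B[0]);
    $C = [];
    for ($i = 0; $i < $m; $i++)
        for ($j = 0; $j < $n; $j++) {
            $C[$i][$j] = 0;
            for ($t = 0; $t < $k; $t++) $C[$i][$j] += $A[$i][$t] * $B[$t][$j];
        }
    return $C;
}
function matT($A) {
    $T = [];
    foreach ($A as $i => $row) foreach ($row as $j => $v) $T[$j][$i] = $v;
    return $T;
}
function matEq($A, $B, $eps = 1e-9) {
    foreach ($A as $i => $row)
        foreach ($row as $j => $v)
            if (abs($v - $B[$i][$j]) > $eps) return false;
    return true;
}
function isWeightedMPInverse($A, $X, $M, $N) {
    $AXA = matMul(matMul($A, $X), $A);
    $XAX = matMul(matMul($X, $A), $X);
    $MAX = matMul($M, matMul($A, $X));
    $NXA = matMul($N, matMul($X, $A));
    return matEq($AXA, $A) && matEq($XAX, $X)
        && matEq($MAX, matT($MAX)) && matEq($NXA, matT($NXA));
}

// For a nonsingular A the weighted MP inverse is the ordinary inverse,
// independently of the positive definite weights M and N.
$A = [[2, 0], [0, 4]];
$X = [[0.5, 0], [0, 0.25]];
$M = [[1, 0], [0, 2]];
$N = [[3, 0], [0, 1]];
var_dump(isWeightedMPInverse($A, $X, $M, $N)); // bool(true)
```

Such a check is convenient when testing the web application through a browser, since it only needs the input matrices and the returned result.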

The implementation of the previous function requires several auxiliary procedures.


Function ithCol generates the ith column ai of A.

function ithCol($column, $ArrayDataMatrix1) {
    $rows1 = count($ArrayDataMatrix1);
    $columns1 = count($ArrayDataMatrix1[0]);
    for ($i = 0; $i < $rows1; $i++) {
        for ($j = 0; $j < $columns1; $j++) {
            if ($j + 1 == $column)
                $ArrayCol[$i][0] = $ArrayDataMatrix1[$i][$j];
        }
    }
    return $ArrayCol;
}
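A minimal usage sketch (a compact equivalent of ithCol() is repeated here so the snippet runs standalone; the sample matrix is our own):

```php
<?php
// Compact equivalent of ithCol(): extracts the $column-th (1-based)
// column of a 2D array as an n x 1 column vector.
function ithCol($column, $ArrayDataMatrix1) {
    $ArrayCol = [];
    foreach ($ArrayDataMatrix1 as $i => $row) {
        $ArrayCol[$i][0] = $row[$column - 1];
    }
    return $ArrayCol;
}

$A = [[1, 2, 3],
      [4, 5, 6]];
print_r(ithCol(2, $A)); // second column: [[2], [5]]
```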

The submatrix Aj = [a1, ..., aj], which contains the first j ≤ n columns of the matrix A = An = [a1, ..., an], can be generated by the following function:

function frstiCol($columns, $ArrayDataMatrix1) {
    $rows1 = count($ArrayDataMatrix1);
    $columns1 = count($ArrayDataMatrix1[0]);
    for ($i = 0; $i < $rows1; $i++) {
        for ($j = 0; $j < $columns1; $j++) {
            if ($j + 1 <= $columns)
                $ArrayCol[$i][$j] = $ArrayDataMatrix1[$i][$j];
        }
    }
    return $ArrayCol;
}
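A short standalone illustration (a compact equivalent of frstiCol() is inlined; the sample matrix is hypothetical):

```php
<?php
// Compact equivalent of frstiCol(): keeps the first $columns columns.
function frstiCol($columns, $ArrayDataMatrix1) {
    $ArrayCol = [];
    foreach ($ArrayDataMatrix1 as $i => $row) {
        for ($j = 0; $j < $columns; $j++) {
            $ArrayCol[$i][$j] = $row[$j];
        }
    }
    return $ArrayCol;
}

$A = [[1, 2, 3],
      [4, 5, 6]];
print_r(frstiCol(2, $A)); // [[1, 2], [4, 5]]
```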

Procedure matNum() converts a matrix of dimension 1 × 1 into a number.

function matNum($ArrayDataMatrix1) {
    $rows1 = count($ArrayDataMatrix1);
    $columns1 = count($ArrayDataMatrix1[0]);
    if ($rows1 == 1 && $columns1 == 1)
        return $ArrayDataMatrix1[0][0];
}
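A standalone sketch of the same conversion; note that our variant returns null explicitly for a non-1 × 1 argument, whereas the original simply returns nothing:

```php
<?php
// Compact equivalent of matNum(): unwraps a 1 x 1 matrix into a scalar.
function matNum($ArrayDataMatrix1) {
    if (count($ArrayDataMatrix1) == 1 && count($ArrayDataMatrix1[0]) == 1) {
        return $ArrayDataMatrix1[0][0];
    }
    return null; // not a 1 x 1 matrix
}

echo matNum([[7.5]]); // 7.5
```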

Function isZero() checks whether its parameter is the zero matrix: it returns the number of entries that remain nonzero after rounding to three decimals, so the returned value 0 indicates a zero matrix.

function isZero($ArrayDataMatrix1) {
    $isZero = 0;
    $rows1 = count($ArrayDataMatrix1);
    $columns1 = count($ArrayDataMatrix1[0]);
    for ($i = 0; $i < $rows1; $i++) {
        for ($j = 0; $j < $columns1; $j++) {
            if (round($ArrayDataMatrix1[$i][$j], 3) != 0)
                $isZero++;
        }
    }
    return $isZero;
}
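The rounding threshold means that very small entries are treated as zero. A runnable sketch (compact equivalent inlined; sample data ours):

```php
<?php
// Compact equivalent of isZero(): counts entries that are nonzero after
// rounding to three decimals; a return value of 0 means "zero matrix".
function isZero($ArrayDataMatrix1) {
    $nonzero = 0;
    foreach ($ArrayDataMatrix1 as $row) {
        foreach ($row as $v) {
            if (round($v, 3) != 0) $nonzero++;
        }
    }
    return $nonzero;
}

var_dump(isZero([[0, 0.0004], [0, 0]]) == 0); // true: all entries round to zero
var_dump(isZero([[0, 1], [2, 0]]) == 0);      // false: two nonzero entries
```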

Function appRow() appends the vector Y (given as a 1 × n array, i.e. a single row) to the matrix X as its last row.

function appRow($ArrayDataMatrix1, $ArrayDataMatrix2) {
    $rows1 = count($ArrayDataMatrix1);
    $columns2 = count($ArrayDataMatrix2[0]);
    for ($i = 0; $i < $columns2; $i++) {
        $ArrayDataMatrix1[$rows1][$i] = $ArrayDataMatrix2[0][$i];
    }
    return $ArrayDataMatrix1;
}
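Because PHP passes arrays by value, the caller's matrix is left untouched and the extended matrix is returned. A runnable sketch (compact equivalent inlined; sample data ours):

```php
<?php
// Compact equivalent of appRow(): appends the first row of the second
// argument as a new last row of the first.
function appRow($ArrayDataMatrix1, $ArrayDataMatrix2) {
    $rows1 = count($ArrayDataMatrix1);
    foreach ($ArrayDataMatrix2[0] as $i => $v) {
        $ArrayDataMatrix1[$rows1][$i] = $v;
    }
    return $ArrayDataMatrix1;
}

$X = [[1, 2], [3, 4]];
$Y = [[5, 6]];
print_r(appRow($X, $Y)); // [[1, 2], [3, 4], [5, 6]]
```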

Finally, the function Inverse() returns the inverse of its input matrix, computed column by column by the same partitioning idea.

function Inverse($ArrayDataMatrix) {
    $rows = count($ArrayDataMatrix);
    $columns = count($ArrayDataMatrix[0]);
    $nii = mskalar($ArrayDataMatrix, 1);
    $N = 1 / $nii;
    $N = array(array($N));
    for ($i = 2; $i <= $columns; $i++) {
        $s = mskalar($ArrayDataMatrix, $i);
        $nii = array(array($s));
        $li = mColumn($ArrayDataMatrix, $i);
        $lit = Transpose($li);
        $giip = MultiplicationMatrices($lit, $N);
        $gii1 = MultiplicationMatrices($giip, $li);
        $gii  = SubsMatrices($nii, $gii1);
        $gii2 = matNum($gii);
        $gii  = 1 / $gii2;
        $fip = MultiplicationMatrices($N, $li);
        $fi1 = ScalarMatrix($fip, $gii);
        $fi  = ScalarMatrix($fi1, -1);
        $fit = Transpose($fi);
        $E1 = MultiplicationMatrices($fi, $fit);
        $E2 = ScalarMatrix($E1, 1 / $gii);
        $E  = SumaMatrices($N, $E2);
        $ArrayDataMatrix1 = appRow($E, $fit);
        $giin = array(array($gii));
        $ArrayDataMatrix2 = appRow($fi, $giin);
        $N = finaly($ArrayDataMatrix1, $ArrayDataMatrix2);
    }
    return $N;
}
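Function Inverse() relies on further auxiliary routines (mskalar(), mColumn(), finaly()) which are not listed here. For cross-checking its output independently, a plain Gauss-Jordan inversion can serve as a reference; the following is our own sketch, not part of the application:

```php
<?php
// Reference Gauss-Jordan inversion for a small nonsingular matrix,
// useful for cross-checking the output of Inverse().
function gjInverse($A) {
    $n = count($A);
    // Augment A with the identity matrix.
    for ($i = 0; $i < $n; $i++) {
        for ($j = 0; $j < $n; $j++) {
            $A[$i][$n + $j] = ($i == $j) ? 1.0 : 0.0;
        }
    }
    for ($col = 0; $col < $n; $col++) {
        // Partial pivoting: pick the largest entry in the current column.
        $p = $col;
        for ($r = $col + 1; $r < $n; $r++) {
            if (abs($A[$r][$col]) > abs($A[$p][$col])) $p = $r;
        }
        list($A[$col], $A[$p]) = array($A[$p], $A[$col]);
        // Normalize the pivot row and eliminate the column elsewhere.
        $piv = $A[$col][$col];
        for ($j = 0; $j < 2 * $n; $j++) $A[$col][$j] /= $piv;
        for ($r = 0; $r < $n; $r++) {
            if ($r == $col) continue;
            $f = $A[$r][$col];
            for ($j = 0; $j < 2 * $n; $j++) $A[$r][$j] -= $f * $A[$col][$j];
        }
    }
    // Extract the right half: the inverse.
    $inv = [];
    for ($i = 0; $i < $n; $i++) {
        $inv[$i] = array_slice($A[$i], $n);
    }
    return $inv;
}

// inverse of [[4, 1], [1, 3]] is (1/11) * [[3, -1], [-1, 4]]
print_r(gjInverse([[4, 1], [1, 3]]));
```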

5  Examples

Example 5.1. In the next table we compare the CPU time spent for searching the matrices stored in the database under two different storage systems.

                    Search of stored matrices
    number of matrices      R format        mR format
            50              0.102 sec.      0.647 sec.
           100              0.166 sec.      0.660 sec.
           500              0.7909 sec.     14.079 sec.
          1000              1.901 sec.      28.650 sec.

Table 3. The CPU time for searching matrices of dimension 70 × 70 in the R and mR formats.

This testing is performed for dense matrices with randomly chosen elements. From Table 3 we conclude that the CPU time for searching is much shorter when the R format is used. Let us mention that searching for the test matrices from [28] is based on a unique name, and therefore does not depend on the number of matrices stored in the database.

Example 5.2. Table 4 lists the CPU times required for the computation of the pseudo-inverse A†MN. Matrices are represented in two different ways: as 1D arrays, based on either the mR or the R format, or in the form of 2D arrays combined with the R format storage system.
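The R and mR storage systems are defined earlier in the paper; as a rough illustration only (the exact schemas may differ from the application's), one can think of mR as one database record per matrix entry and R as one record per matrix, with rows serialized into a string:

```php
<?php
// Illustrative sketch only: two ways of flattening a matrix for storage.
// "mR-like": one (i, j, value) triple per element -> many short records.
// "R-like" : the whole matrix serialized into a single string record.
function toElementRecords($A) {           // mR-like
    $records = [];
    foreach ($A as $i => $row) {
        foreach ($row as $j => $v) {
            $records[] = [$i, $j, $v];
        }
    }
    return $records;
}

function toSerializedRecord($A) {         // R-like
    $rows = array_map(function ($row) { return implode(',', $row); }, $A);
    return implode(';', $rows);
}

$A = [[1, 2], [3, 4]];
echo count(toElementRecords($A)), "\n";   // 4 records in the element form
echo toSerializedRecord($A), "\n";        // "1,2;3,4" in the serialized form
```

Retrieving a matrix stored element-wise requires many records to be fetched and reassembled, which is consistent with the longer search times measured for the mR format.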

              Computation of A†MN: matrix representation / storage system
    matrix (m × n)    1D array: mR format    1D array: R format    2D array: R format
    A30,3                  4.485 sec.             2.440 sec.            0.897 sec.
    A50,4                 32.834 sec.            21.270 sec.            6.199 sec.
    F15,2                  0.666 sec.             0.342 sec.            0.166 sec.
    F30,3                  4.379 sec.             2.347 sec.            0.880 sec.
    F50,4                 32.564 sec.            20.981 sec.            6.077 sec.
    S50,4                 32.597 sec.            20.874 sec.            6.070 sec.
    S80,5                269.282 sec.           206.007 sec.           40.75 sec.
    50 × 50               32.423 sec.            21.294 sec.            6.203 sec.
    15 × 15                0.632 sec.             0.471 sec.            0.160 sec.
    50 × 35                9.79 sec.              5.587 sec.            1.997 sec.
    45 × 70              125.18 sec.             86.210 sec.           19.920 sec.
    60 × 60               70.260 sec.            49.964 sec.           12.627 sec.

Table 4. Computation time for A†MN when the matrices do not already exist in the database.

The results of the testing show that the best processing time is achieved when matrices are represented in the form of 2D arrays.

Example 5.3. In this example, we compare the CPU times obtained during the computation of the pseudo-inverse A†MN (when the input matrices are not stored in the database) with the CPU times required for searching for the same results (when the input matrices are already stored in the database). Matrices are given in the 2D array representation with the R format storage system.


             A†MN: matrix representation / storage system
     m × n     2D array: R format (exist)   2D array: R format (don't exist)
    20 × 20          0.043 sec.                     0.309 sec.
    30 × 30          0.051 sec.                     2.105 sec.
    45 × 45          0.062 sec.                     4.665 sec.
    50 × 50          0.070 sec.                     6.729 sec.
    60 × 60          0.075 sec.                    12.675 sec.
    70 × 70          0.079 sec.                    26.614 sec.
    80 × 80          0.096 sec.                    43.809 sec.

Table 5. The CPU time for A†MN when matrices exist and do not exist in the database.
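The behaviour measured in Example 5.3 is the classical lookup-before-recompute pattern. The sketch below illustrates it with a plain array standing in for the MySQL table of stored inverses; the function and key names are ours, not from the application:

```php
<?php
// Sketch of the lookup-before-recompute pattern from Example 5.3.
$storedInverses = [];  // stand-in for the database table of results

function getInverse($key, $computeFn, &$store) {
    if (isset($store[$key])) {
        return [$store[$key], 'found'];      // fast path: stored result
    }
    $result = $computeFn();                  // slow path: recompute
    $store[$key] = $result;                  // store for the next request
    return [$result, 'computed'];
}

$compute = function () { return [[0.5, 0], [0, 0.25]]; }; // dummy inverse
list($r1, $how1) = getInverse('A_2x2', $compute, $storedInverses);
list($r2, $how2) = getInverse('A_2x2', $compute, $storedInverses);
echo "$how1, $how2\n"; // "computed, found"
```

In the application the fast path is a SELECT by the matrix key, which explains why the "exist" column of Table 5 grows so slowly with the dimension.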

The CPU time required for searching for the solution in the database is negligible with respect to the time needed for the recalculation, especially for matrices of larger dimensions. Let us mention that 10030 records were stored in the table matrices at the moment of testing.

Example 5.4. A comparison of the CPU times for executing fundamental matrix operations is given in Table 6.

                      Fundamental operations on matrices
    operands (m × n)       operation         1D array: R format   2D array: R format
    3 × 3, 3 × 3           multiplication        0.0468 sec.          0.0043 sec.
    5 × 5, 5 × 5           multiplication        0.0481 sec.          0.0042 sec.
    10 × 10, 10 × 10       multiplication        0.0643 sec.          0.0062 sec.
    20 × 50, 50 × 50       multiplication        0.1531 sec.          0.0778 sec.
    45 × 45, 45 × 70       multiplication        0.5618 sec.          0.1369 sec.
    80 × 80, 80 × 60       multiplication        0.9325 sec.          0.7270 sec.
    80 × 70, 70 × 70       multiplication        0.9675 sec.          0.5600 sec.
    81 × 81, 81 × 81       multiplication        1.2831 sec.          0.7470 sec.
    3 × 3, 3 × 3           addition              0.0425 sec.          0.0040 sec.
    5 × 5, 5 × 5           addition              0.0431 sec.          0.0043 sec.
    10 × 10, 10 × 10       addition              0.0431 sec.          0.0048 sec.
    50 × 50, 50 × 50       addition              0.1400 sec.          0.0326 sec.
    60 × 60, 60 × 60       addition              0.2012 sec.          0.0480 sec.
    70 × 70, 70 × 70       addition              0.2481 sec.          0.0705 sec.
    80 × 80, 80 × 80       addition              0.3287 sec.          0.0919 sec.
    3 × 3, 3 × 3           subtraction           0.0475 sec.          0.0041 sec.
    5 × 5, 5 × 5           subtraction           0.0512 sec.          0.0044 sec.
    10 × 10, 10 × 10       subtraction           0.0350 sec.          0.0050 sec.
    50 × 50, 50 × 50       subtraction           0.1431 sec.          0.0319 sec.
    60 × 60, 60 × 60       subtraction           0.1875 sec.          0.0733 sec.
    70 × 70, 70 × 70       subtraction           0.2512 sec.          0.0415 sec.
    81 × 81, 81 × 81       subtraction           0.3150 sec.          0.1196 sec.

Table 6. The CPU time for processing 1D arrays and 2D arrays in the R format.

The results show that the smallest computational time is obtained when the matrix is represented as a 2D array with the R format storage system. Testing was done on the local machine and from a client in a wireless network; we accessed the web server through infrastructure-mode wireless networking with an access point. Testing was executed on a server machine with: Windows edition: Windows Vista(TM) Ultimate; Processor: Intel(R) Pentium(R) Dual CPU T3200 @ 2.00GHz; Memory (RAM): 2940 MB; System type: 32-bit operating system; Free software: WAMP5 1.7.4, which installs PHP 5.2.5, Apache 2.2.6 Server, MySQL 5.0.45 and phpMyAdmin 2.11.2.1.

The CPU times show that searching for a matrix stored in the R format gives better results than searching for the same matrix stored in the mR format. Furthermore, the best results are obtained when matrices are represented as two-dimensional arrays (on the middle tier), the application logic is adapted to the R format, and the matrices are stored in the R format in the database. The searching time is governed by the amount of data transferred between the middle (application) tier and the database tier.
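One plausible contributor to the 1D/2D gap on the middle tier is the extra index arithmetic that a flat representation needs on every element access; this is an illustration of ours, not the paper's own analysis:

```php
<?php
// Element (i, j) of an n-column matrix kept as a flat 1D array needs an
// index computation ($i * $n + $j) on every access, while the nested 2D
// form indexes directly.
function get1D($flat, $n, $i, $j) { return $flat[$i * $n + $j]; }
function get2D($nested, $i, $j)   { return $nested[$i][$j]; }

$nested = [[1, 2, 3], [4, 5, 6]];
$flat   = [1, 2, 3, 4, 5, 6];      // row-major flattening, n = 3

echo get1D($flat, 3, 1, 2), "\n";  // 6
echo get2D($nested, 1, 2), "\n";   // 6
```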

6  Conclusions

This research contributes to the development of an OSS application and the construction of a matrix library, especially for the computation of generalized inverses. Several storage techniques are discussed and tested in the PHP/MySQL environment. Data preparation, an important preprocessing step of data mining, is a further application for database matrix computations. There are several issues for future work. We plan to develop mechanisms which decrease memory usage and minimize the searching time under different storage systems. Furthermore, it is possible to adapt UDFs to sparse structured matrices, such as diagonal matrices, sparse symmetric matrices, triangular matrices, Toeplitz matrices, etc. We need to identify other mathematical operations with wide applicability that can be implemented in the PHP/MySQL environment, thereby enhancing the DBMS data mining functionality. Also, further implementation of the matrix library will be based on the principles of Object-Oriented Programming.

References

[1] J. Clear, D. Dunn, B. Harvey, M.L. Heytens and P. Lohman, Non-stop SQL/MX primitives for knowledge discovery, In: ACM KDD Conference, 1999, 425-429.
[2] T. Chong, Shamik D. Sharma, E. Brewer and J. Saltz, Sparse matrix application, Parallel Processing Letters 5(4) (1995), 671-683.
[3] S. Egilsson, H. Gudbjartsson and S. Sigurjnsson, SQL query generator utilizing matrix structures, U.S. Patent 6,578,028, Jun 10, 2003.
[4] R. Elmasri and S.B. Navathe, Fundamentals of Database Systems, Addison-Wesley, 4th edition, 2003.
[5] J. Greenspan and B. Bulger, MySQL/PHP Database Applications, M&T Books: An imprint of IDG Books Worldwide, Inc., 2001.
[6] T.N.E. Greville, Some applications of the pseudoinverse of a matrix, SIAM Rev. 2 (1960), 15-22.
[7] C. Itiki, Dynamic programming and diagnostic classification, J. Optim. Theory Appl. 127 (2005), 579-586.
[8] E. Karaman and E. Kayhan, A client server based model for parallel matrix computation, SIAM Rev. 3 (1960), 15-22.
[9] T. Kurmayya and K.C. Sivakumar, Moore-Penrose inverse of a Gram matrix and its nonnegativity, J. Optim. Theory Appl. 139 (2008), 201-207.
[10] S.Y.T. Lee, H.W. Kim and S. Gupta, Measuring open source software success, OMEGA-Int. J. Manage S. 37 (2009), 426-438.
[11] J.B. Layton, Efficient direct computation of the pseudo-inverse and its gradient, Internat. J. Numer. Methods Engrg. 40 (1997), 4211-4223.
[12] J. Meloni, PHP 5, MA: Thomson Course Technology, Boston, 2004.
[13] S. Mohideen and V. Cherkassky, On recursive calculation of the generalized inverse of a matrix, ACM Trans. Math. Software 17 (1991), 130-147.
[14] J.C. Nash and R.L.C. Wang, Algorithm 645: Subroutines for testing programs that compute the generalized inverse of a matrix, ACM Trans. Math. Software 12 (1986), 274-277.


[15] C. Ordonez and J. García, Vector and matrix operations programmed with UDFs in a relational DBMS, In: Proceedings of the 15th ACM International Conference on Information and Knowledge Management, Arlington, Virginia, USA, 2006, 503-512.
[16] R. Page and P. Factor, SQL Server Matrix Workbench, Web site: http://www.simple-talk.com/sql/t-sql-programming/sql-server-matrix-workbench/, 2008.
[17] M.D. Petković, P.S. Stanimirović and M.B. Tasić, Effective partitioning method for computing weighted Moore-Penrose inverse, Comput. Math. Appl. 55 (2008), 1720-1734.
[18] M.D. Petković and P.S. Stanimirović, Symbolic computation of the Moore-Penrose inverse using partitioning method, Int. J. Comput. Math. 82 (2005), 355-367.
[19] P.S. Stanimirović and M.B. Tasić, Partitioning method for rational and polynomial matrices, Appl. Math. Comput. 155 (2004), 137-163.
[20] P.S. Stanimirović and M.B. Tasić, A problem in computation of pseudoinverses, Appl. Math. Comput. 135 (2003), 443-469.
[21] P.S. Stanimirović and M.B. Tasić, Computing generalized inverses using LU factorization of matrix product, Int. J. Comput. Math. 85 (2008), 1865-1878.
[22] M.B. Tasić, P.S. Stanimirović and M.D. Petković, Symbolic computation of weighted Moore-Penrose inverse using partitioning method, Appl. Math. Comput. 155 (2004), 137-163.
[23] M.B. Tasić, P.S. Stanimirović and S.H. Pepić, About the generalized LM-inverse and the weighted Moore-Penrose inverse, Appl. Math. Comput. 216 (2010), 114-124.
[24] M.B. Tasić and P.S. Stanimirović, Differentiation of generalized inverses for rational and polynomial matrices, Appl. Math. Comput. 216 (2010), 2092-2106.
[25] F.E. Udwadia and R.E. Kalaba, Analytical Dynamics: A New Approach, Cambridge University Press, Cambridge, England, 1996.
[26] G.R. Wang and Y.L. Chen, A recursive algorithm for computing the weighted Moore-Penrose inverse A†MN, J. Comput. Math. 4 (1986), 74-85.
[27] H.E. Williams and D. Lane, Web Database Applications with PHP and MySQL, O'Reilly & Associates, Inc., 2002.
[28] G. Zielke, Report on test matrices for generalized inverses, Computing 36 (1986), 105-162.