An extreme learning machine (ELM) is a feedforward neural network (FNN)-like learning system in which the connections to the output neurons are adjustable, while the connections to and within the hidden neurons are randomly fixed. Numerous applications have demonstrated the feasibility and high efficiency of ELM-like systems, but whether this holds for general applications has remained an open question. In this two-part paper, we conduct a comprehensive feasibility analysis of ELM. In Part I, we answer the question by theoretically justifying the following: 1) for suitable activation functions, such as polynomials, the Nadaraya-Watson kernel, and sigmoid functions, ELM-like systems can attain the theoretical generalization bound of FNNs with all connections adjusted; that is, they do not degrade the generalization capability of FNNs even though the connections to and within the hidden neurons are randomly fixed; 2) the number of hidden neurons needed for an ELM-like system to achieve this bound can be estimated; and 3) whenever the activation function is a polynomial, the resulting hidden-layer output matrix has full column rank, so the generalized inverse technique can be applied efficiently to solve the ELM-like system; furthermore, in the nonpolynomial case, Tikhonov regularization can be applied to guarantee weak regularity without sacrificing generalization capability. In Part II, by contrast, we reveal a different aspect of the feasibility of ELM: there also exist activation functions for which the corresponding ELM degrades the generalization capability. The obtained results underlie the feasibility and efficiency of ELM-like systems, and yield various generalizations and improvements of these systems as well.
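The training scheme summarized above — random, fixed hidden-layer connections with only the output weights fitted via the generalized inverse or Tikhonov regularization — can be illustrated by a minimal sketch. This is a hypothetical toy example with assumed dimensions and a sigmoid activation, not the paper's exact construction:

```python
import numpy as np

def elm_fit(X, y, n_hidden=50, ridge=0.0, seed=0):
    """Fit an ELM-like system: hidden weights random and frozen,
    output weights solved in closed form."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input->hidden weights (fixed)
    b = rng.normal(size=n_hidden)                # random hidden biases (fixed)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))       # hidden-layer output matrix
    if ridge > 0.0:
        # Tikhonov-regularized solution for the output weights
        beta = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ y)
    else:
        # generalized (Moore-Penrose) inverse solution
        beta = np.linalg.pinv(H) @ y
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# Toy regression: approximate y = sin(x) on [0, pi]
X = np.linspace(0.0, np.pi, 200).reshape(-1, 1)
y = np.sin(X).ravel()
W, b, beta = elm_fit(X, y, n_hidden=50, ridge=1e-6)
err = np.max(np.abs(elm_predict(X, W, b, beta) - y))
```

Only `beta` is learned; the random hidden layer acts as a fixed nonlinear feature map, which is why the whole fit reduces to a single linear least-squares (or ridge) problem.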
