Addy's new love for ANNs

Difficulty: Hard
Problem

Here is an introduction to ANNs. Neural networks (also referred to as connectionist systems) are a computational approach based on a large collection of neural units, loosely modeling the way the brain solves problems with large clusters of biological neurons connected by axons. Each neural unit is connected with many others, and links can be reinforcing or inhibitory in their effect on the activation state of connected neural units. Each individual neural unit may have a summation function which combines the values of all its inputs. There may be a threshold or limiting function on each connection and on the unit itself, which the signal must surpass before it can propagate to other neurons. These systems are self-learning and trained rather than explicitly programmed, and they excel in areas where the solution or feature detection is difficult to express in a traditional computer program.

Neural networks typically consist of multiple layers or a cube design, and the signal path traverses from front to back. Backpropagation is where the forward stimulation is used to adjust weights on the "front" neural units, and this is sometimes done in combination with training where the correct result is known. More modern networks are a bit more free-flowing in terms of stimulation and inhibition, with connections interacting in a much more chaotic and complex fashion. Dynamic neural networks are the most advanced in that they can, based on rules, dynamically form new connections and even new neural units while disabling others.
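As a concrete illustration of the summation-and-threshold behaviour described above, here is a minimal sketch of a single neural unit. The weights, threshold, and inputs are made-up values, and the function name neural_unit is purely illustrative, not anything from the problem.

// A toy neural unit: a weighted sum of inputs followed by a
// threshold (step) activation. Purely illustrative.
#include <iostream>
#include <vector>
using namespace std;

// Fires 1.0 if the weighted sum of inputs exceeds the threshold, else 0.0.
double neural_unit(const vector<double>& inputs,
                   const vector<double>& weights, double threshold) {
    double sum = 0.0;                      // the unit's summation function
    for (size_t i = 0; i < inputs.size(); ++i)
        sum += inputs[i] * weights[i];     // positive weights reinforce,
                                           // negative weights inhibit
    return sum > threshold ? 1.0 : 0.0;    // threshold / limiting function
}

int main() {
    vector<double> inputs  = {1.0, 0.5, 0.0};
    vector<double> weights = {0.4, 0.8, -0.3};
    cout << neural_unit(inputs, weights, 0.5) << "\n";  // weighted sum 0.8 > 0.5, prints 1
}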

The goal of the neural network is to solve problems in the same way that the human brain would, although several neural networks are much more abstract. Modern neural network projects typically work with a few thousand to a few million neural units and millions of connections, which is still several orders of magnitude less complex than the human brain and closer to the computing power of a worm. New brain research often stimulates new patterns in neural networks. One new approach is using connections which span much further and link processing layers, rather than always being localized to adjacent neurons. Other research explores the different types of signal that axons propagate over time, which are more complex than a simple on or off. Neural networks are based on real numbers, with the value of the core and of the axon typically being a representation between 0.0 and 1.0.

An interesting facet of these systems is that they are unpredictable in their success with self-learning. After training, some become great problem solvers while others don't perform as well. Training them typically requires several thousand cycles of interaction.

Like other machine learning methods – systems that learn from data – neural networks have been used to solve a wide variety of tasks, like computer vision and speech recognition, that are hard to solve using ordinary rule-based programming. Historically, the use of neural network models marked a directional shift in the late eighties from high-level (symbolic) artificial intelligence, characterized by expert systems with knowledge embodied in if-then rules, to low-level (sub-symbolic) machine learning, characterized by knowledge embodied in the parameters of a dynamical system.

Backpropagation and resurgence

A key advance that came later was the backpropagation algorithm, which effectively solved the exclusive-or problem, and more generally the problem of quickly training multi-layer neural networks (Werbos 1975). In the mid-1980s, parallel distributed processing became popular under the name connectionism. The textbook by David E. Rumelhart and James McClelland (1986) provided a full exposition of the use of connectionism in computers to simulate neural processes. Neural networks, as used in artificial intelligence, have traditionally been viewed as simplified models of neural processing in the brain, even though the relation between this model and the biological architecture of the brain is debated; it is not clear to what degree artificial neural networks mirror brain function.

Support vector machines and other, much simpler methods such as linear classifiers gradually overtook neural networks in machine learning popularity. As computing power increased through the use of GPUs and distributed computing, image and visual recognition problems came to the forefront, and neural networks were deployed again on larger scales. This came to be called "deep learning", which is essentially a re-branding of neural networks, though with an emphasis on modern parallel hardware implementations.

The story has nothing to do with the question. The question is: given an array A[1..N] and Q queries, each consisting of two numbers l and r, find the Kachori factor of the segment A[l..r].

For every positive integer x, let P(x) denote the number of occurrences of x in the sub-array.

The Kachori factor is the sum of the products x * (P(x)^K) over every positive integer x.

As it can be large, output it modulo 10^9 + 7.
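To make the definition concrete, here is a minimal brute-force sketch that recounts the sub-array for every query. It is far too slow for the stated constraints (roughly O(N) work per query) and is meant only to pin down the definition; the helper name power_mod is illustrative.

// Brute-force Kachori factor: recount the sub-array for each query.
#include <bits/stdc++.h>
using namespace std;

const long long MOD = 1000000007LL;

// x^k mod MOD by fast exponentiation
long long power_mod(long long x, long long k) {
    long long r = 1;
    x %= MOD;
    while (k > 0) {
        if (k & 1) r = r * x % MOD;
        x = x * x % MOD;
        k >>= 1;
    }
    return r;
}

int main() {
    long long n, q, k;
    cin >> n >> q >> k;
    vector<long long> a(n + 1);
    for (int i = 1; i <= n; ++i) cin >> a[i];
    while (q--) {
        int l, r;
        cin >> l >> r;
        unordered_map<long long, long long> cnt;  // P(x) for the sub-array
        for (int i = l; i <= r; ++i) cnt[a[i]]++;
        long long ans = 0;
        for (auto &p : cnt)                       // sum of x * P(x)^K
            ans = (ans + p.first % MOD * power_mod(p.second, k)) % MOD;
        cout << ans << "\n";
    }
    return 0;
}

On the sample input this prints 8, 650, 96, 129, 747, matching the expected output below.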

Input Format

The first line contains N, Q, and K.

The second line contains N integers A1, A2, A3, ..., AN.

Q lines follow, each containing two integers li and ri.

Constraints

1<=N<=200000

1<=Q<=200000

1<=Ai<=1000000

1<=K<=1000

Output Format

Output the answer to each query in a separate line.

Sample Input

7 5 4

1 8 8 8 2 6 6

4 4

2 5

6 7

1 3

1 7

Sample Output

8

650

96

129

747
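As a check of the definition against the sample: for the second query (l = 2, r = 5) the sub-array is 8 8 8 2, so P(8) = 3 and P(2) = 1, and the Kachori factor is 8 * 3^4 + 2 * 1^4 = 648 + 2 = 650, matching the second line of the sample output.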

Time Limit: 5 sec
Memory Limit: 256 MB