A live test of our automated Momentum Formula trading system, which trades oil and gold, has been running since February. This report presents the system's results.
|Metric|Value|
|---|---|
|Number of test days|138 (as of 2 Jul 2019)|
|Number of trades|74|
|Capital on 15 Feb 2019|1,000 USD|
|Capital on 2 Jul 2019|1,158 USD|
|Profit (YTD)|15.8 %|
|Risk (maximum drawdown)|11.42 %|
|Annualized profit|47 %|
|Sharpe ratio (units of profit per unit of risk)|2.18|
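As a quick sanity check, the annualized figure can be reproduced by compounding the YTD profit over a full year. A minimal sketch, assuming simple geometric compounding (the Sharpe ratio would additionally need the full daily return series, which is not published here):

```python
# Reproducing the headline figures from the table above.
# Assumption: geometric compounding over the 138-day test window.
start, end, days = 1000.0, 1158.0, 138

ytd_return = end / start - 1                        # 0.158 -> 15.8 %
annualized = (1 + ytd_return) ** (365 / days) - 1   # roughly 0.474 -> 47 % p.a.

print(f"YTD profit: {ytd_return:.1%}")
print(f"Annualized: {annualized:.1%}")
```

The annualized value comes out at roughly 47 %, matching the reported figure.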
Orders are executed through Interactive Brokers.
The results include the impact of errors that occurred during the testing period (for example, a system crash over one weekend).
We have further improved the system during the reporting period: we eliminated infrastructure bugs and enriched the model with new information to reduce drawdowns. Below you can see the drawdowns of the reported strategy (blue curve) and of the improved version that we will soon deploy (orange curve).
The risk level of this system on oil is about 5 % (the strategy's largest drawdown was about 5 %). The underlying asset (crude oil) showed a maximum fall of around 50 % over the reference period (2015 to end of 2018), across two decline episodes: June 2015 – February 2016 and September 2018 – December 2018.
Head of FRS Development, Michal Dufek
In today’s article, we will have a look at a method of price-development analysis which has been known for centuries – technical analysis. We will, however, look at it from a different point of view than usual: through the market logic hidden behind technical analysis. Among our applications for discovering and exploiting trading strategies, we have been developing the PatternLab software, whose foundations rest on technical analysis.
Patterns in price movement behaviour – where do they come from and how to use them?
The basis of systems used for trading on financial markets is usually a particular pattern in price behaviour which tends to repeat and can therefore be called a systematic error. These errors (interferences) in the data are created by specific, repeated behaviour of strong market participants; to understand them, it is important to have at least a basic knowledge of market microstructure (i.e. to understand terms such as market participant, order types, order book and depth of market).
An example of a specific price behaviour pattern (double bottom, technical analysis)
Repeated price patterns are a side effect of the behaviour (buying/selling) of a market participant powerful enough to move the market price of the traded asset; the participant themselves may not even notice this side effect. As an example, take an imaginary pension fund which at 11:00 adjusts its position in company “S” stock, for instance by liquidating it (selling). Liquidation of large positions is often handled by simple execution algorithms which time the sales and split them into smaller parts so as not to create price pressure, which would hurt the order’s executor (they would be selling at a lower price). Technically speaking, the market participant – the pension fund – sends a considerable number of sell orders to the exchange’s order book, thereby creating downward pressure on the price. Because the fund knows this, the orders are sent to the order book in time intervals so that the impact on the asset price is as small as possible. The price pattern you can see in the picture above is created as a side effect of this process.
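To make the double-bottom example above concrete, here is a minimal, hypothetical detector. The thresholds (`level_tol`, `bounce_min`) are illustrative assumptions for this sketch, not PatternLab's actual rules:

```python
# Illustrative double-bottom detector (thresholds are assumptions).
# A double bottom: two local price minima at roughly the same level,
# separated by an intermediate bounce.

def find_double_bottom(prices, level_tol=0.01, bounce_min=0.02):
    """Return (i, j) indices of a candidate double bottom, or None."""
    # local minima: strictly lower than both neighbours
    lows = [i for i in range(1, len(prices) - 1)
            if prices[i] < prices[i - 1] and prices[i] < prices[i + 1]]
    for a in range(len(lows)):
        for b in range(a + 1, len(lows)):
            i, j = lows[a], lows[b]
            level = prices[i]
            # the two bottoms must sit at roughly the same level...
            if abs(prices[j] - level) / level > level_tol:
                continue
            # ...with a sufficient bounce between them
            peak = max(prices[i:j + 1])
            if (peak - level) / level >= bounce_min:
                return i, j
    return None

prices = [10.0, 9.5, 9.0, 9.4, 9.8, 9.4, 9.05, 9.6, 10.1]
print(find_double_bottom(prices))  # (2, 6): bottoms at indices 2 and 6
```

A production scanner would of course work on many timeframes and add filters (volume, trend context); this sketch only shows the geometric idea.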
Such systematic errors arise on different time frames and in different forms. There can, of course, be various reasons for the creation of such a price pattern. Some can be observed exactly – for example, the reaction to the announcement of a company’s quarterly results – in contrast to our example, where the process can be observed but the specific cause can be identified only with great difficulty. The table below shows an example of changes in macroeconomic variables which can lead to the creation of a price pattern.
Changes in economic data
The trading idea based on observing price patterns is very simple: if I observe a particular pattern (systematic error) in the price development, I receive market information that a familiar situation is emerging, and I know I am observing a process in which I understand exactly what is happening on the market. Traders call this situation “finding your market”. Exploiting it is intuitive – if I know who is acting in the market, what they are doing and why, I can place my own order accordingly and participate in the price movement. As I outlined in one of the previous posts, when using this trading opportunity we do not rely on the outputs of prediction models; it is a reaction to the current situation – more precisely, the discovery and exploitation of a newly emerged trading opportunity.
Our team is developing an application which helps you search for your preferred price patterns by scanning thousands of instruments and testing whether the occurrence of these patterns is statistically significant, saving you a lot of time studying and scanning markets. In addition, the application is equipped with a function which instantly informs you whenever a market you track records your preferred price pattern, ensuring you never miss a trading opportunity. With the overall application workflow in place, research on the patterns shown in the application is reaching its peak. Once the library of predefined patterns has been finished, the final production environment will be created.
If you are interested in more information about our service, do not hesitate to contact us.
In today’s post, we will introduce our MTA (Multicriterial Text Analysis) software. The MTA product helps users make decisions when shopping for various products and services.
The product aims to help users make sense of the large amount of opinions published on the internet about specific goods or services they would like to buy or use. User reviews and ratings are scattered across various discussion forums, product review websites and portals dedicated to specific areas. It is difficult and time-consuming for an ordinary user to look up this information, get familiar with it and form their own opinion.
To collect data, we use a set of tools (crawlers) to download user reviews and articles about the selected group of products or services. These crawlers are adjusted to the structure of defined websites from which they collect relevant data that can be helpful for analysis of topics and sentiments. We have a set of crawlers through which we have already downloaded more than a million user reviews.
When collecting data, we usually face several problems. One of the biggest is the varied naming of products across websites. Even though it is an identical product, its name differs, which complicates product identification. For instance, the product “Canon EOS 600D” appears under all of the following listing names:
- “DSLR Canon EOS 600D camera”,
- “Canon EOS 600D SLR digital camera”,
- “Digital camera Canon EOS 600D SLR (18 mpx, 7,6 cm (3″) flip screen, Full HD)”,
- “Digital DSLR camera Canon EOS 600D (18 megapixels, 7,6cm (3inches) display, APS-C CMOS sensor, WLAN with NFC, Full HD, Digic 7) kit incl. EF-S 18-55mm, 1:4,0 – 5,6 IS STM, black”
It is important to correctly recognise which names identify the same product and link the published reviews to that product. We use machine learning methods in this process.
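The matching itself is done with machine learning; as a simpler illustration of the task, here is a baseline sketch that scores token overlap (Jaccard similarity) between a listing name and a catalogue of canonical product names. The catalogue, threshold and helper names are assumptions for the example:

```python
# Baseline product-name matching via token-set (Jaccard) similarity.
# This is an illustrative stand-in for the ML approach described above.

def tokens(name):
    return set(name.lower().replace(",", " ").split())

def best_match(listing, catalogue, threshold=0.3):
    """Return the catalogue name whose token set best overlaps the listing."""
    def jaccard(a, b):
        return len(a & b) / len(a | b)
    scored = [(jaccard(tokens(listing), tokens(c)), c) for c in catalogue]
    score, name = max(scored)
    return name if score >= threshold else None

catalogue = ["Canon EOS 600D", "Canon EOS 700D", "Nikon D850"]
print(best_match("DSLR Canon EOS 600D camera", catalogue))  # Canon EOS 600D
```

A token-overlap baseline already handles reordered words (“DSLR Canon … camera”), but a learned model is needed for harder cases such as abbreviations or marketing suffixes.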
For further analysis, the obtained reviews must be modified. The first step is to split them into individual sentences, which usually carry independent topics. Next, we transform words into their base form and remove diacritics. Additionally, it is beneficial to remove words which do not carry any useful information (such as prepositions, conjunctions etc.). To do this, we use our own POS analyser, which assigns a word class to each word in the sentence, and a stop-word dataset created in-house. Documents edited this way are transformed into vector form using the Tf-idf methodology.
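The preprocessing pipeline can be sketched in a few lines. This is a minimal stand-in: a toy stop-word list replaces the proprietary POS analyser and lemmatiser, and the Tf-idf weighting is the textbook formula rather than the exact variant used in production:

```python
# Minimal sketch of the review-preprocessing pipeline described above.
import math
import unicodedata

STOP_WORDS = {"a", "the", "and", "is", "with"}  # assumption: real list is far larger

def preprocess(sentence):
    # strip diacritics, lowercase, then drop stop words
    ascii_text = (unicodedata.normalize("NFKD", sentence)
                  .encode("ascii", "ignore").decode())
    return [w for w in ascii_text.lower().split() if w not in STOP_WORDS]

def tf_idf(docs):
    """docs: list of token lists -> list of {term: weight} vectors."""
    n = len(docs)
    df = {}                                   # document frequency per term
    for doc in docs:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1
    vectors = []
    for doc in docs:
        vec = {}
        for term in doc:
            tf = doc.count(term) / len(doc)   # term frequency in this doc
            idf = math.log(n / df[term])      # inverse document frequency
            vec[term] = tf * idf
        vectors.append(vec)
    return vectors

reviews = ["The lens is sharp and bright", "The battery is weak"]
docs = [preprocess(r) for r in reviews]
print(tf_idf(docs))
```

In practice a library implementation (e.g. scikit-learn's `TfidfVectorizer`) would be used instead of hand-rolled weighting; the sketch only shows the shape of the pipeline.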
To analyse large amounts of unstructured data, we use machine learning methods. Using these, we identify the most discussed topics in the data and determine reviewers’ positive or negative sentiment towards individual features of the products. Using clustering methods (k-means), we divide reviews into clusters with the same topics. We are able to identify clusters with a high degree of internal consistency, where the identified topics correlate strongly with the main parameters of the examined product segment. These clusters, created for a particular segment based on professional articles, are then used to classify reviews of individual products.
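The clustering step can be illustrated with a minimal k-means (Lloyd's algorithm). For readability the sketch runs on small 2-D points standing in for the high-dimensional Tf-idf vectors, and the initial centroids are passed in explicitly; the real system would use a library implementation with proper initialisation:

```python
# Minimal k-means (Lloyd's algorithm) on 2-D points standing in
# for Tf-idf review vectors. Illustrative only.

def kmeans(points, init_centroids, iters=20):
    centroids = list(init_centroids)
    clusters = [[] for _ in centroids]
    for _ in range(iters):
        # assignment step: each point goes to its nearest centroid
        clusters = [[] for _ in centroids]
        for p in points:
            d = [(p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centroids]
            clusters[d.index(min(d))].append(p)
        # update step: move each centroid to the mean of its cluster
        for i, cl in enumerate(clusters):
            if cl:
                centroids[i] = (sum(p[0] for p in cl) / len(cl),
                                sum(p[1] for p in cl) / len(cl))
    return centroids, clusters

# two obvious groups, e.g. "image quality" vs "battery" topics
points = [(0.1, 0.2), (0.2, 0.1), (0.15, 0.15),
          (1.0, 1.1), (1.1, 0.9), (0.9, 1.0)]
centroids, clusters = kmeans(points, [points[0], points[-1]])
print(sorted(len(c) for c in clusters))  # [3, 3]
```

The two recovered centroids sit at the centres of the two topic groups; in production, cluster quality is what the "internal consistency" check above evaluates.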
The simplest way we present the results of text analysis is a static report. This output includes product names, their discussed features, and statistics on how often the listed features are perceived positively or negatively – for example:
* excellent image sensor resolution,
* excellent focus sensitivity,
* comfortable grip,
* unrivalled image quality,
* rear buttons backlit,
* 4k uhd video 1920 x 1080 / record slow motion,
* pleasantly surprised with nikon d850,
* well-managed noise level 6400,
* gb high consumption,
* more expensive lenses,
* in order to utilise potential, it’s necessary to have adequate lenses, which means the best ones available,
* price quality doesn’t come cheap.
We are currently developing an interactive web application as well as an app for mobile devices. At the same time, for easy integration into existing solutions, there will be an API with regularly updated data.
Do not hesitate to contact us for more information or to provide us with feedback.