Job Overview
Founded in 2014, Kaiko is a rapidly growing global fintech company with offices in NYC, London, Paris, and Singapore. We are the leading crypto market data provider for financial institutions and enterprises in the digital asset space.
What We Do
Kaiko provides financial data products and solutions across three main business units:
● Market Data: “CEX” Centralized Exchanges Market Data: we collect, structure, and distribute market data from 100+ cryptocurrency trading venues. “DEX” Decentralized Protocols Market Data: we run blockchain infrastructure in order to read, collect, engineer, and distribute venue-level market data from DeFi protocols.
● Analytics: proprietary quantitative models & data solutions to price and assess risk.
● Indices: a suite of mono-asset rates and benchmarks, as well as cross-asset indices.
Kaiko’s products are available worldwide on all networks and infrastructures: public APIs; private and on-premises networks; private and hybrid cloud set-ups; and blockchain-native delivery (Kaiko oracles solution).
Additionally, Kaiko’s Research publications are read by thousands of industry professionals and cited in the world’s leading media organizations. We provide original insights and in-depth analysis on crypto markets using Kaiko’s data and products.
Who We Are
We’re a team of 80+ (and growing) passionate individuals with a deep interest in building data solutions and supporting the growth of the digital finance economy. We’re proud of Kaiko’s talented team and are committed to our international representation and diversity. Our people and their values are the foundation of our continued success.
About The Role
The Challenge:
You will be joining a fast-paced engineering team made up of people with significant experience working with terabytes of data. We believe that everybody has something to bring to the table, and therefore put collaborative effort and teamwork above all else (and not just from an engineering perspective). You will be able to work autonomously as an equally trusted member of the team, and participate in efforts such as:
● Addressing high-availability problems: cross-region data replication, disaster recovery, etc.
● Addressing “big data” problems: 200+ million messages/day, 160B data points since 2010
● Improving our development workflow, continuous integration, continuous delivery, and, in a broader sense, our team practices
● Expanding our platform’s observability through monitoring, logging, alerting, and tracing
What you’ll be doing:
● Design, develop, and deploy scalable and observable backend microservices
● Reflect on our storage, querying, and aggregation capabilities, as well as the technologies required to meet our objectives
● Work hand-in-hand with the business team on developing new features, addressing issues, and extending the platform
Our tech stack:
● Platforms (packaged in containers): Golang, with Rust recently adopted for some specific use cases
● Protocols: gRPC, HTTP (phasing out in favor of gRPC), WebSocket (phasing out in favor of gRPC)
● Database systems: ClickHouse (main datastore), PostgreSQL (ACID workloads), ScyllaDB
● Messaging: Kafka
● Caching: Redis
● Configuration management and provisioning: Terraform, Ansible
● Service deployment: Terraform, Nomad (plugged into Consul and Vault), Kubernetes
● Secrets management and PKI: Vault
● Service discovery: Consul
● Proxying: HAProxy, Traefik
● Monitoring: VictoriaMetrics, Grafana
● Alerting: AlertManager, PagerDuty
● Logging: Vector, Loki
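To give a purely illustrative sense of how some of these pieces fit together, below is a minimal sketch (not Kaiko’s actual code) of the kind of Go service this stack implies: a gRPC server registering the standard gRPC health service, which Consul or Nomad health checks can probe, plus a separate HTTP listener exposing Prometheus-format metrics that a VictoriaMetrics and Grafana setup could scrape. The ports and layout are assumptions made for the example.

// Illustrative sketch only: a minimal Go gRPC service with a standard health
// endpoint and a Prometheus-compatible /metrics endpoint. Ports are arbitrary.
package main

import (
	"log"
	"net"
	"net/http"

	"github.com/prometheus/client_golang/prometheus/promhttp"
	"google.golang.org/grpc"
	"google.golang.org/grpc/health"
	"google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	// gRPC listener; business RPCs generated from .proto files would be
	// registered on this server as well.
	lis, err := net.Listen("tcp", ":9090")
	if err != nil {
		log.Fatalf("failed to listen: %v", err)
	}
	grpcServer := grpc.NewServer()

	// Standard gRPC health service, the kind of endpoint Consul or Nomad
	// health checks can probe.
	grpc_health_v1.RegisterHealthServer(grpcServer, health.NewServer())

	// Separate HTTP listener exposing metrics in Prometheus exposition
	// format, scrapeable by VictoriaMetrics.
	go func() {
		http.Handle("/metrics", promhttp.Handler())
		log.Fatal(http.ListenAndServe(":2112", nil))
	}()

	log.Println("gRPC listening on :9090, metrics on :2112")
	log.Fatal(grpcServer.Serve(lis))
}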
About You:
● Significant experience as a Software/DevOps Engineer
● Knowledgeable about data ingestion pipelines and massive data querying
● Have worked with, in no particular order: microservices architecture, infrastructure as code, self-managed services (e.g. deploying and maintaining our own databases), distributed services, server-side development, etc.
You’ll notice that we don’t have any “hard” requirements in terms of development platforms or technologies: this is because we are primarily interested in people capable of adapting to an ever-changing landscape of technical requirements, who learn fast and are not afraid to constantly push our technical boundaries. It is not uncommon for us to benchmark new technologies for a specific feature, or to change our infrastructure in a big way to better suit our needs. The most important skills for us revolve around two things:
● What we like to call “core” knowledge: what a software process is, how it interacts with a machine’s or the network’s resources, what kind of constraints we can expect for certain workloads, etc.
● How fast you can adapt to a technology you didn’t know existed 10 minutes ago
In short, we are looking for someone able to spot early on that spending 10 days migrating data to a more efficient schema is the better solution for long-term performance, compared to scaling out a database cluster in a matter of minutes.
Nice to have:
● Experience with data scraping over HTTP, WebSocket, and/or the FIX protocol
● Experience developing financial product methodologies for indices, reference rates, and exchange rates
● Knowledgeable about the technicalities of financial market data, such as the differences between calls, puts, straddles, different types of bonds, swaps, CFDs, CDS, options, futures, etc.
Personal Skills:
● Honest: receiving and giving feedback is very important to you
● Humble: making new errors is an essential part of your journey
● Empathetic: you feel a sense of responsibility for all the team’s endeavors rather than focusing on individual contributions
● Committed: as an equally important member of the team, you want to make yourself heard while respecting everybody’s point of view
● Fluent in written and spoken English
● You have the utmost respect for legacy code and infrastructure, with some occasional and perfectly understandable respectful complaints
What we offer:
● An attractive compensation package, including equity and healthcare.
● An entrepreneurial environment with a lot of autonomy and responsibilities.
● Opportunity to work with an internationally diverse team.
● The hardware of your choice to help you deliver your best work.
● Good perks (remote-friendly, meal vouchers, multiple team events and staff surprises).
Talent Acquisition Process:
● Introduction call (30 minutes)
● Meeting with CTO (30 minutes to 1 hour)
● Tech discussion with members of the team (1.5 hours)
● Cross-team interview (45 minutes to 1 hour)
Interested? Please send us your CV. As our working language is English, we would appreciate it if you send us your application and any accompanying documents in English.
Diversity & Inclusion:
At Kaiko, we believe in the diversity of thought because we appreciate that this makes us stronger. Therefore, we encourage applications from everyone who can offer their unique experience to our collective achievements.