
Data Engineer (Backend)

Job Description
Tessera provides ownership of the world's most sought-after NFTs! Working at Tessera, you will build on the cutting edge of art, finance, and blockchain technology to help shape the future of digital collecting experiences.
We are looking for an exceptional data engineer to join our team. You will work closely with our CTO and web stack team to design our databases in anticipation of future data needs, identify where to find that data, and build scalable backend infrastructure. You will discover and aggregate data from a variety of sources and blockchains, drawing on skills that include database architecture, data structure design, pipeline and ETL development, API/SDK development, and backend software development, as we build the foundation for rapid growth, future data science initiatives, and continued innovation in this exciting space! Experience with blockchain, NFTs, and DeFi is preferred.
Responsibilities
- Work with our development team to continually release technical enhancements
- Be responsible for identifying and anticipating our data needs, optimizing our backend software infrastructure, setting data priorities, and building a scalable foundation for our fast-evolving site tessera.co
- Own the data lifecycle (from ingestion and automated quality checks to discovery, usage, and database setup)
- Build custom integrations between cloud-based or blockchain-based systems using APIs and other data sources
- Design efficient data structures, database schemas and ETL for long-term sustainability
- Ship high-quality, well-tested, secure, and maintainable code
- Write server scripts and APIs
- Routinely inspect server code for speed optimization and practical trade-offs
- Build a scalable NFT metadata backend infrastructure for tessera.co
- Incorporate data processing and workflow management tools (e.g., AWS services) into pipeline design
- Design, develop, and optimize data pipelines and backend services for real-time decisioning, reporting, data collection, and related features/functions
- Drive strategic technology decisions related to the appropriate data stores for the job (e.g., data warehouses)
- (Long-term) Architect, build, and launch new data models that provide intuitive analytics to the team
- Wrangle large-scale data sets from the blockchain and other site APIs (e.g., OpenSea)
- Build data expertise and own data quality for the pipelines you create
- Communicate effectively with technical and non-technical audiences, both verbally and in writing
- Thrive in our fast-paced, agile development environment, bringing a penchant for task management and respect for efficient, best-practice development principles!
Job Requirements
- 6+ months of tinkering and/or participating somewhere in Web3 (DeFi, NFTs, DAOs, etc.)
- Experience querying and interacting with EVM and non-EVM-based blockchains
- Understanding of the low-level idiosyncrasies of popular blockchains, including Ethereum and Solana
- 3 or more years of relevant software experience in a data or backend-focused role
- Strong experience with two or more of the following languages: Python, SQL, JavaScript, Scala
- Experience designing data structures, database schemas and ETL pipelines from scratch
- Experience with workflow systems such as Apache Airflow
- 2 or more years of professional work experience implementing ETL pipelines using services such as PySpark, Glue, Dataflow, Lambda, Athena, S3, GCS, SNS, Pub/Sub, Kinesis, etc.
- Experience with scalable cloud-based solutions
- A proactive, autonomous team player and self-starter with the ability to anticipate future needs
- Capable of prioritizing multiple projects to meet goals without management oversight
- Experience in communicating with users, other technical teams, and product management to understand requirements, describe data priorities and challenges, and technical design needs
- Excellent writing skills and the ability to drive decisions through influence
- Proficiency in the English language, both written and verbal, sufficient for success in a remote and largely asynchronous work environment
- Strong attention to detail
Bonus points for
- BS/MS in Computer Science, Computer Engineering, or a related technical field
- Previous experience in a rapidly scaling start-up environment
- Professional work experience using real-time streaming systems (Kafka/Kafka Connect, Spark, Flink or AWS Kinesis)
- Previous experience building Analytics/BI systems from scratch
- Previous experience building large-scale data architectures
- Previous experience in BI or Data Science