Posted: 23 hours ago
Work from Office | Full Time
What you'll do

As a Data Engineer in the IAM Data Lake Team, you'll develop cutting-edge infrastructure for big data analytics on public cloud platforms, supporting high-concurrency customer-facing products. You should have independent experience building data lakes and warehouses, preferably on Azure. This role requires a hands-on approach to solving complex problems, collaborating with cross-functional teams, and continuously improving engineering practices. You'll work closely with product managers, developers, and platform engineers to deliver high-quality, timely product releases, focusing on features, performance, security, and accessibility. This position is an individual contributor role reporting to the Director of Engineering.

Responsibilities

- Drive design, implementation, testing, and release of products
- Build big data pipelines and analytics infrastructure on Azure with Data Factory, Databricks, Event Hub, Data Explorer, Cosmos DB, and Azure RDBMS platforms
- Build secure networking and reliable infrastructure for High Availability and Disaster Recovery
- Build big data streaming solutions with hundreds of concurrent publishers and subscribers
- Collaborate closely with Product, Design, and Engineering teams to build new features
- Participate in an Agile environment using Scrum software development practices, code review, automated unit testing, end-to-end testing, continuous integration, and deployment
- Think about how to solve problems at scale and build fault-tolerant systems that leverage telemetry and metrics
- Investigate, fix, and maintain code as needed for production issues
- Operate a high-reliability, high-uptime service and participate in the on-call rotation

Job Designation

Hybrid: Employees divide their time between in-office and remote work. Access to an office location is required.
(Frequency: Minimum 2 days per week; may vary by team, but there will be a weekly in-office expectation)

Positions at Docusign are assigned a job designation of either In Office, Hybrid, or Remote and are specific to the role/job. Preferred job designations are not guaranteed when changing positions within Docusign. Docusign reserves the right to change a position's job designation depending on business needs and as permitted by local law.

What you bring

Basic

- BS degree in Computer Science, Engineering, or equivalent
- 5+ years of experience in a software engineering related field
- 2+ years of experience with Azure Data services: Azure Data Explorer, Azure Data Factory, Databricks, Event Hubs
- 2+ years of experience with data warehousing and data modeling on an RDBMS like SQL Server
- Proficiency in cloud platform deployment with Azure ARM templates or Terraform
- Experience using Git or other version control systems and CI/CD systems
- Focus on writing high-quality code that is easy for others to maintain
- Strong understanding of and experience with agile methodologies

Preferred

- Experience with Cosmos DB or NoSQL platforms
- Experience building large data lakes and data warehouses
- Proficiency in modern server-side development using programming languages like C# or others
- Experience building cloud apps on Azure
- Strong interest in, or documented experience with, large-scale microservice architectures on Kubernetes
- Proficiency in big data processing with Apache Spark using Python or Scala
- Proficiency in data streaming applications with Event Hub/Kafka and Spark Streaming
- Proficiency in data pipeline orchestration with Data Factory or similar
- A track record of being a self-starter; individual and team responsibility is our main driver in the development work
Life at Docusign

Working here

Docusign is committed to building trust and making the world more agreeable for our employees, customers, and the communities in which we live and work. You can count on us to listen, be honest, and try our best to do what's right, every day. At Docusign, everything is equal. We each have a responsibility to ensure every team member has an equal opportunity to succeed, to be heard, to exchange ideas openly, to build lasting relationships, and to do the work of their life. Best of all, you will be able to feel deep pride in the work you do, because your contribution helps us make the world better than we found it. And for that, you'll be loved by us, our customers, and the world in which we live.

Accommodation

Docusign is committed to providing reasonable accommodations for qualified individuals with disabilities in our job application procedures.

Applicant and Candidate Privacy Notice

#LI-Hybrid #LI-SV6