
305 Sharding Jobs - Page 5

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

8.0 years

0 Lacs

Hyderabad, Telangana, India

Remote

All roles at JumpCloud are Remote unless otherwise specified in the Job Description.

About JumpCloud
JumpCloud® delivers a unified open directory platform that makes it easy to securely manage identities, devices, and access across your organization. With JumpCloud, IT teams and MSPs enable users to work securely from anywhere and manage their Windows, Apple, Linux, and Android devices from a single platform. JumpCloud is IT Simplified.

About the Role
We’re looking for a Senior Software Engineer to join JumpCloud’s Data Engineering team.

Data Engineering’s Vision & Mission
Vision: Data to drive JumpCloud and our Customers.
Mission: To put in place foundational technology and process to up-level the data capabilities of our Product and our Data Warehouse/Lakehouse.

We are introducing an Event-Based Architecture, developing and refining a data model that supports JumpCloud’s growth strategy, and modernizing our Data Warehouse. A successful data engineer will exhibit an entrepreneurial spirit and enjoy tackling data engineering problems that most other people cannot solve, as well as shaping the future capabilities of JumpCloud’s data engineering, performance reporting, and data governance. Come be a part of an exciting new team where you will work on challenging projects and rich data sets, and develop valuable skills. This role involves taking full ownership of our core MongoDB and the supporting services. It will require managing, monitoring, and optimizing MongoDB clusters.

Responsibilities:
• Design, implement, and maintain scalable and reliable data pipelines for ingesting, transforming, and loading data into and out of MongoDB
• Manage, monitor, and optimize MongoDB clusters for performance, availability, and security, including sharding, replication, and backup/recovery strategies (a sharding sketch follows this listing)
• Develop and deploy RESTful APIs and microservices that interact with MongoDB, enabling data access and manipulation for various applications
• Collaborate closely with software engineers, data scientists, and product managers to understand data requirements and translate them into technical solutions
• Implement data governance, data quality, and data security best practices for MongoDB environments
• Troubleshoot and resolve database-related issues promptly and efficiently
• Participate in code reviews and contribute to architectural discussions to ensure high-quality and scalable solutions
• Stay up to date with the latest trends and technologies in the NoSQL database space, particularly MongoDB

We’re looking for:
• 8-12 years of experience as a Software/Data Engineer, Database Administrator, or similar role with a strong focus on MongoDB
• Proficiency in designing, implementing, and managing MongoDB sharded clusters and replica sets
• 5-8 years of experience in at least one of the following languages: Node.js (preferred), Go, Python, or Java
• 1-3 years of experience in technical leadership (leading, coaching, and/or mentoring junior team members)
• Experience developing and deploying microservices or APIs that interact with databases
• Solid understanding of database concepts (indexing, query optimization, data modeling, ACID properties for relational vs. BASE for NoSQL)
• Familiarity with cloud platforms (AWS, Azure, GCP)
• Experience with version control systems (e.g., Git)
• Excellent problem-solving, analytical, and communication skills
• Willingness to learn and embrace new technologies, languages, and frameworks (we will test your skills with a take-home exercise)
• Comfort with Linux or macOS as a desktop development environment
• A strong team player who wants to win together
• Strong communication skills

Bonus points if you have:
• Experience with technologies like Kafka, ksqlDB, Kafka Connect, PostgreSQL, or the ELK stack
• Experience building data pipelines and lakes in AWS
• Data operations experience using tools such as Terraform, CloudFormation, and/or Salt

Where you’ll be working/Location:
JumpCloud is committed to being Remote First, meaning that you are able to work remotely within the country noted in the Job Description. You must be located in and authorized to work in the country noted in the job description to be considered for this role.

Please note: There is an expectation that our engineers participate in on-call shifts. You will be expected to commit to being ready and able to respond during your assigned shift, so that alerts don’t go unaddressed.

Language: JumpCloud has teams in 15+ countries around the world and conducts its internal business in English. The interview and any additional screening process will take place primarily in English. To be considered for a role at JumpCloud, you will be required to speak and write English fluently. Any additional language requirements will be included in the details of the job description.

Why JumpCloud?
If you thrive working in a fast, SaaS-based environment and you are passionate about solving challenging technical problems, we look forward to hearing from you! JumpCloud is an incredible place to share and grow your expertise! You’ll work with amazing talent across each department who are passionate about our mission. We’re out-of-the-box thinkers, so your unique ideas and approaches for conceiving a product and/or feature will be welcome. You’ll have a voice in the organization as you work with a seasoned executive team, a supportive board, and a proven market that our customers are excited about.

One of JumpCloud’s three core values is to “Build Connections.” To us that means creating “human connection with each other regardless of our backgrounds, orientations, geographies, religions, languages, gender, race, etc. We care deeply about the people that we work with and want to see everyone succeed.” - Rajat Bhargava, CEO

Please submit your résumé and a brief explanation of why you would be a good fit for JumpCloud. Please note that JumpCloud is not accepting third-party résumés at this time.

JumpCloud is an equal opportunity employer. All applicants will be considered for employment without attention to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran or disability status.

Scam Notice: Please be aware that there are individuals and organizations that may attempt to scam job seekers by offering fraudulent employment opportunities in the name of JumpCloud. These scams may involve fake job postings, unsolicited emails, or messages claiming to be from our recruiters or hiring managers. Please note that JumpCloud will never ask for any personal account information, such as credit card details or bank account numbers, during the recruitment process. Additionally, JumpCloud will never send you a check for any equipment prior to employment. All communication related to interviews and offers from our recruiters and hiring managers will come from official company email addresses (@jumpcloud.com) and will never ask for any payment, fee, or purchase from the job seeker. If you are contacted by anyone claiming to represent JumpCloud and you are unsure of their authenticity, please do not provide any personal/financial information, and contact us immediately at recruiting@jumpcloud.com with the subject line “Scam Notice”. #BI-Remote
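The listing above centers on MongoDB sharded clusters. As a minimal sketch of enabling sharding with the official Node.js driver (the connection URI and the jobpe.events database/collection names are hypothetical, and the commands assume you are connected through a mongos router of a sharded deployment):

import { MongoClient } from "mongodb";

// Connect through a mongos router (hypothetical URI).
const client = new MongoClient("mongodb://mongos.example.com:27017");

async function main() {
  await client.connect();
  const admin = client.db("admin");

  // Enable sharding for the database, then shard the collection
  // on a hashed key to spread writes evenly across shards.
  await admin.command({ enableSharding: "jobpe" });
  await admin.command({
    shardCollection: "jobpe.events",
    key: { orgId: "hashed" },
  });

  console.log("jobpe.events is now sharded on { orgId: 'hashed' }");
  await client.close();
}

main().catch(console.error);

A hashed shard key trades range-query locality for even write distribution; a range key on a carefully chosen field is the usual alternative when queries scan contiguous ranges.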

Posted 1 month ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

Remote

All roles at JumpCloud are Remote unless otherwise specified in the Job Description.

About JumpCloud
JumpCloud® delivers a unified open directory platform that makes it easy to securely manage identities, devices, and access across your organization. With JumpCloud, IT teams and MSPs enable users to work securely from anywhere and manage their Windows, Apple, Linux, and Android devices from a single platform. JumpCloud is IT Simplified.

About the Role
We’re looking for a Software Engineer to join JumpCloud’s Data Engineering team.

Data Engineering’s Vision & Mission
Vision: Data to drive JumpCloud and our Customers.
Mission: To put in place foundational technology and process to up-level the data capabilities of our Product and our Data Warehouse/Lakehouse.

We are introducing an Event-Based Architecture, developing and refining a data model that supports JumpCloud’s growth strategy, and modernizing our Data Warehouse. A successful data engineer will exhibit an entrepreneurial spirit and enjoy tackling data engineering problems that most other people cannot solve, as well as shaping the future capabilities of JumpCloud’s data engineering, performance reporting, and data governance. Come be a part of an exciting new team where you will work on challenging projects and rich data sets, and develop valuable skills. This role involves taking full ownership of our core MongoDB and the supporting services. It will require managing, monitoring, and optimizing MongoDB clusters.

Responsibilities:
• Design, implement, and maintain scalable and reliable data pipelines for ingesting, transforming, and loading data into and out of MongoDB (an ingest sketch follows this listing)
• Manage, monitor, and optimize MongoDB clusters for performance, availability, and security, including sharding, replication, and backup/recovery strategies
• Develop and deploy RESTful APIs and microservices that interact with MongoDB, enabling data access and manipulation for various applications
• Collaborate closely with software engineers, data scientists, and product managers to understand data requirements and translate them into technical solutions
• Implement data governance, data quality, and data security best practices for MongoDB environments
• Troubleshoot and resolve database-related issues promptly and efficiently
• Participate in code reviews and contribute to architectural discussions to ensure high-quality and scalable solutions
• Stay up to date with the latest trends and technologies in the NoSQL database space, particularly MongoDB

We’re looking for:
• 4-6 years of experience as a Data Engineer, Database Administrator, or similar role with a strong focus on MongoDB
• Proficiency in designing, implementing, and managing MongoDB sharded clusters and replica sets
• 3-6 years of experience in at least one of the following languages: Node.js (preferred), Go, Python, or Java
• Experience developing and deploying microservices or APIs that interact with databases
• Solid understanding of database concepts (indexing, query optimization, data modeling, ACID properties for relational vs. BASE for NoSQL)
• Familiarity with cloud platforms (AWS, Azure, GCP)
• Experience with version control systems (e.g., Git)
• Excellent problem-solving, analytical, and communication skills
• Willingness to learn and embrace new technologies, languages, and frameworks (we will test your skills with a take-home exercise)
• Comfort with Linux or macOS as a desktop development environment
• A strong team player who wants to win together
• Strong communication skills

Bonus points if you have:
• Experience with technologies like Kafka, ksqlDB, Kafka Connect, PostgreSQL, or the ELK stack
• Experience building data pipelines and lakes in AWS
• Data operations experience using tools such as Terraform, CloudFormation, and/or Salt

Where you’ll be working/Location:
JumpCloud is committed to being Remote First, meaning that you are able to work remotely within the country noted in the Job Description. You must be located in and authorized to work in the country noted in the job description to be considered for this role.

Please note: There is an expectation that our engineers participate in on-call shifts. You will be expected to commit to being ready and able to respond during your assigned shift, so that alerts don’t go unaddressed.

Language: JumpCloud has teams in 15+ countries around the world and conducts its internal business in English. The interview and any additional screening process will take place primarily in English. To be considered for a role at JumpCloud, you will be required to speak and write English fluently. Any additional language requirements will be included in the details of the job description.

Why JumpCloud?
If you thrive working in a fast, SaaS-based environment and you are passionate about solving challenging technical problems, we look forward to hearing from you! JumpCloud is an incredible place to share and grow your expertise! You’ll work with amazing talent across each department who are passionate about our mission. We’re out-of-the-box thinkers, so your unique ideas and approaches for conceiving a product and/or feature will be welcome. You’ll have a voice in the organization as you work with a seasoned executive team, a supportive board, and a proven market that our customers are excited about.

One of JumpCloud’s three core values is to “Build Connections.” To us that means creating “human connection with each other regardless of our backgrounds, orientations, geographies, religions, languages, gender, race, etc. We care deeply about the people that we work with and want to see everyone succeed.” - Rajat Bhargava, CEO

Please submit your résumé and a brief explanation of why you would be a good fit for JumpCloud. Please note that JumpCloud is not accepting third-party résumés at this time.

JumpCloud is an equal opportunity employer. All applicants will be considered for employment without attention to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran or disability status.

Scam Notice: Please be aware that there are individuals and organizations that may attempt to scam job seekers by offering fraudulent employment opportunities in the name of JumpCloud. These scams may involve fake job postings, unsolicited emails, or messages claiming to be from our recruiters or hiring managers. Please note that JumpCloud will never ask for any personal account information, such as credit card details or bank account numbers, during the recruitment process. Additionally, JumpCloud will never send you a check for any equipment prior to employment. All communication related to interviews and offers from our recruiters and hiring managers will come from official company email addresses (@jumpcloud.com) and will never ask for any payment, fee, or purchase from the job seeker. If you are contacted by anyone claiming to represent JumpCloud and you are unsure of their authenticity, please do not provide any personal/financial information, and contact us immediately at recruiting@jumpcloud.com with the subject line “Scam Notice”. #BI-Remote
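The pipeline bullet above is about loading data into and out of MongoDB. A minimal ingest sketch with the official Node.js driver; the URI and the jobpe.events names are hypothetical, not from the posting:

import { MongoClient } from "mongodb";

const client = new MongoClient("mongodb://localhost:27017"); // hypothetical URI

async function ingest(batch: Array<{ orgId: string; ts: Date; payload: unknown }>) {
  const events = client.db("jobpe").collection("events"); // hypothetical names
  // A compound index matching the common query shape: filter by org, sort by time.
  await events.createIndex({ orgId: 1, ts: -1 });
  // ordered: false lets the rest of the batch proceed if one document fails.
  await events.insertMany(batch, { ordered: false });
}

async function main() {
  await client.connect();
  await ingest([{ orgId: "acme", ts: new Date(), payload: { kind: "login" } }]);
  await client.close();
}

main().catch(console.error);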

Posted 1 month ago

Apply

4.0 years

0 Lacs

Greater Delhi Area

Remote

All roles at JumpCloud are Remote unless otherwise specified in the Job Description.

About JumpCloud
JumpCloud® delivers a unified open directory platform that makes it easy to securely manage identities, devices, and access across your organization. With JumpCloud, IT teams and MSPs enable users to work securely from anywhere and manage their Windows, Apple, Linux, and Android devices from a single platform. JumpCloud is IT Simplified.

About the Role
We’re looking for a Software Engineer to join JumpCloud’s Data Engineering team.

Data Engineering’s Vision & Mission
Vision: Data to drive JumpCloud and our Customers.
Mission: To put in place foundational technology and process to up-level the data capabilities of our Product and our Data Warehouse/Lakehouse.

We are introducing an Event-Based Architecture, developing and refining a data model that supports JumpCloud’s growth strategy, and modernizing our Data Warehouse. A successful data engineer will exhibit an entrepreneurial spirit and enjoy tackling data engineering problems that most other people cannot solve, as well as shaping the future capabilities of JumpCloud’s data engineering, performance reporting, and data governance. Come be a part of an exciting new team where you will work on challenging projects and rich data sets, and develop valuable skills. This role involves taking full ownership of our core MongoDB and the supporting services. It will require managing, monitoring, and optimizing MongoDB clusters.

Responsibilities:
• Design, implement, and maintain scalable and reliable data pipelines for ingesting, transforming, and loading data into and out of MongoDB (an aggregation sketch follows this listing)
• Manage, monitor, and optimize MongoDB clusters for performance, availability, and security, including sharding, replication, and backup/recovery strategies
• Develop and deploy RESTful APIs and microservices that interact with MongoDB, enabling data access and manipulation for various applications
• Collaborate closely with software engineers, data scientists, and product managers to understand data requirements and translate them into technical solutions
• Implement data governance, data quality, and data security best practices for MongoDB environments
• Troubleshoot and resolve database-related issues promptly and efficiently
• Participate in code reviews and contribute to architectural discussions to ensure high-quality and scalable solutions
• Stay up to date with the latest trends and technologies in the NoSQL database space, particularly MongoDB

We’re looking for:
• 4-6 years of experience as a Data Engineer, Database Administrator, or similar role with a strong focus on MongoDB
• Proficiency in designing, implementing, and managing MongoDB sharded clusters and replica sets
• 3-6 years of experience in at least one of the following languages: Node.js (preferred), Go, Python, or Java
• Experience developing and deploying microservices or APIs that interact with databases
• Solid understanding of database concepts (indexing, query optimization, data modeling, ACID properties for relational vs. BASE for NoSQL)
• Familiarity with cloud platforms (AWS, Azure, GCP)
• Experience with version control systems (e.g., Git)
• Excellent problem-solving, analytical, and communication skills
• Willingness to learn and embrace new technologies, languages, and frameworks (we will test your skills with a take-home exercise)
• Comfort with Linux or macOS as a desktop development environment
• A strong team player who wants to win together
• Strong communication skills

Bonus points if you have:
• Experience with technologies like Kafka, ksqlDB, Kafka Connect, PostgreSQL, or the ELK stack
• Experience building data pipelines and lakes in AWS
• Data operations experience using tools such as Terraform, CloudFormation, and/or Salt

Where you’ll be working/Location:
JumpCloud is committed to being Remote First, meaning that you are able to work remotely within the country noted in the Job Description. You must be located in and authorized to work in the country noted in the job description to be considered for this role.

Please note: There is an expectation that our engineers participate in on-call shifts. You will be expected to commit to being ready and able to respond during your assigned shift, so that alerts don’t go unaddressed.

Language: JumpCloud has teams in 15+ countries around the world and conducts its internal business in English. The interview and any additional screening process will take place primarily in English. To be considered for a role at JumpCloud, you will be required to speak and write English fluently. Any additional language requirements will be included in the details of the job description.

Why JumpCloud?
If you thrive working in a fast, SaaS-based environment and you are passionate about solving challenging technical problems, we look forward to hearing from you! JumpCloud is an incredible place to share and grow your expertise! You’ll work with amazing talent across each department who are passionate about our mission. We’re out-of-the-box thinkers, so your unique ideas and approaches for conceiving a product and/or feature will be welcome. You’ll have a voice in the organization as you work with a seasoned executive team, a supportive board, and a proven market that our customers are excited about.

One of JumpCloud’s three core values is to “Build Connections.” To us that means creating “human connection with each other regardless of our backgrounds, orientations, geographies, religions, languages, gender, race, etc. We care deeply about the people that we work with and want to see everyone succeed.” - Rajat Bhargava, CEO

Please submit your résumé and a brief explanation of why you would be a good fit for JumpCloud. Please note that JumpCloud is not accepting third-party résumés at this time.

JumpCloud is an equal opportunity employer. All applicants will be considered for employment without attention to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran or disability status.

Scam Notice: Please be aware that there are individuals and organizations that may attempt to scam job seekers by offering fraudulent employment opportunities in the name of JumpCloud. These scams may involve fake job postings, unsolicited emails, or messages claiming to be from our recruiters or hiring managers. Please note that JumpCloud will never ask for any personal account information, such as credit card details or bank account numbers, during the recruitment process. Additionally, JumpCloud will never send you a check for any equipment prior to employment. All communication related to interviews and offers from our recruiters and hiring managers will come from official company email addresses (@jumpcloud.com) and will never ask for any payment, fee, or purchase from the job seeker. If you are contacted by anyone claiming to represent JumpCloud and you are unsure of their authenticity, please do not provide any personal/financial information, and contact us immediately at recruiting@jumpcloud.com with the subject line “Scam Notice”. #BI-Remote
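Transforming data inside MongoDB usually means the aggregation framework. A minimal sketch that counts events per organization per day, assuming the same hypothetical jobpe.events collection and MongoDB 5.0+ (for $dateTrunc):

import { MongoClient } from "mongodb";

const client = new MongoClient("mongodb://localhost:27017"); // hypothetical URI

async function dailyCounts() {
  const events = client.db("jobpe").collection("events"); // hypothetical names
  // $match first so the pipeline can use an index on { orgId: 1, ts: -1 }.
  return events
    .aggregate([
      { $match: { ts: { $gte: new Date("2024-01-01") } } },
      {
        $group: {
          _id: { orgId: "$orgId", day: { $dateTrunc: { date: "$ts", unit: "day" } } },
          count: { $sum: 1 },
        },
      },
      { $sort: { "_id.day": 1 } },
    ])
    .toArray();
}

async function main() {
  await client.connect();
  console.log(await dailyCounts());
  await client.close();
}

main().catch(console.error);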

Posted 1 month ago

Apply

8.0 years

0 Lacs

Mumbai Metropolitan Region

Remote

All roles at JumpCloud are Remote unless otherwise specified in the Job Description.

About JumpCloud
JumpCloud® delivers a unified open directory platform that makes it easy to securely manage identities, devices, and access across your organization. With JumpCloud, IT teams and MSPs enable users to work securely from anywhere and manage their Windows, Apple, Linux, and Android devices from a single platform. JumpCloud is IT Simplified.

About the Role
We’re looking for a Senior Software Engineer to join JumpCloud’s Data Engineering team.

Data Engineering’s Vision & Mission
Vision: Data to drive JumpCloud and our Customers.
Mission: To put in place foundational technology and process to up-level the data capabilities of our Product and our Data Warehouse/Lakehouse.

We are introducing an Event-Based Architecture (a change-stream sketch follows this listing), developing and refining a data model that supports JumpCloud’s growth strategy, and modernizing our Data Warehouse. A successful data engineer will exhibit an entrepreneurial spirit and enjoy tackling data engineering problems that most other people cannot solve, as well as shaping the future capabilities of JumpCloud’s data engineering, performance reporting, and data governance. Come be a part of an exciting new team where you will work on challenging projects and rich data sets, and develop valuable skills. This role involves taking full ownership of our core MongoDB and the supporting services. It will require managing, monitoring, and optimizing MongoDB clusters.

Responsibilities:
• Design, implement, and maintain scalable and reliable data pipelines for ingesting, transforming, and loading data into and out of MongoDB
• Manage, monitor, and optimize MongoDB clusters for performance, availability, and security, including sharding, replication, and backup/recovery strategies
• Develop and deploy RESTful APIs and microservices that interact with MongoDB, enabling data access and manipulation for various applications
• Collaborate closely with software engineers, data scientists, and product managers to understand data requirements and translate them into technical solutions
• Implement data governance, data quality, and data security best practices for MongoDB environments
• Troubleshoot and resolve database-related issues promptly and efficiently
• Participate in code reviews and contribute to architectural discussions to ensure high-quality and scalable solutions
• Stay up to date with the latest trends and technologies in the NoSQL database space, particularly MongoDB

We’re looking for:
• 8-12 years of experience as a Software/Data Engineer, Database Administrator, or similar role with a strong focus on MongoDB
• Proficiency in designing, implementing, and managing MongoDB sharded clusters and replica sets
• 5-8 years of experience in at least one of the following languages: Node.js (preferred), Go, Python, or Java
• 1-3 years of experience in technical leadership (leading, coaching, and/or mentoring junior team members)
• Experience developing and deploying microservices or APIs that interact with databases
• Solid understanding of database concepts (indexing, query optimization, data modeling, ACID properties for relational vs. BASE for NoSQL)
• Familiarity with cloud platforms (AWS, Azure, GCP)
• Experience with version control systems (e.g., Git)
• Excellent problem-solving, analytical, and communication skills
• Willingness to learn and embrace new technologies, languages, and frameworks (we will test your skills with a take-home exercise)
• Comfort with Linux or macOS as a desktop development environment
• A strong team player who wants to win together
• Strong communication skills

Bonus points if you have:
• Experience with technologies like Kafka, ksqlDB, Kafka Connect, PostgreSQL, or the ELK stack
• Experience building data pipelines and lakes in AWS
• Data operations experience using tools such as Terraform, CloudFormation, and/or Salt

Where you’ll be working/Location:
JumpCloud is committed to being Remote First, meaning that you are able to work remotely within the country noted in the Job Description. You must be located in and authorized to work in the country noted in the job description to be considered for this role.

Please note: There is an expectation that our engineers participate in on-call shifts. You will be expected to commit to being ready and able to respond during your assigned shift, so that alerts don’t go unaddressed.

Language: JumpCloud has teams in 15+ countries around the world and conducts its internal business in English. The interview and any additional screening process will take place primarily in English. To be considered for a role at JumpCloud, you will be required to speak and write English fluently. Any additional language requirements will be included in the details of the job description.

Why JumpCloud?
If you thrive working in a fast, SaaS-based environment and you are passionate about solving challenging technical problems, we look forward to hearing from you! JumpCloud is an incredible place to share and grow your expertise! You’ll work with amazing talent across each department who are passionate about our mission. We’re out-of-the-box thinkers, so your unique ideas and approaches for conceiving a product and/or feature will be welcome. You’ll have a voice in the organization as you work with a seasoned executive team, a supportive board, and a proven market that our customers are excited about.

One of JumpCloud’s three core values is to “Build Connections.” To us that means creating “human connection with each other regardless of our backgrounds, orientations, geographies, religions, languages, gender, race, etc. We care deeply about the people that we work with and want to see everyone succeed.” - Rajat Bhargava, CEO

Please submit your résumé and a brief explanation of why you would be a good fit for JumpCloud. Please note that JumpCloud is not accepting third-party résumés at this time.

JumpCloud is an equal opportunity employer. All applicants will be considered for employment without attention to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran or disability status.

Scam Notice: Please be aware that there are individuals and organizations that may attempt to scam job seekers by offering fraudulent employment opportunities in the name of JumpCloud. These scams may involve fake job postings, unsolicited emails, or messages claiming to be from our recruiters or hiring managers. Please note that JumpCloud will never ask for any personal account information, such as credit card details or bank account numbers, during the recruitment process. Additionally, JumpCloud will never send you a check for any equipment prior to employment. All communication related to interviews and offers from our recruiters and hiring managers will come from official company email addresses (@jumpcloud.com) and will never ask for any payment, fee, or purchase from the job seeker. If you are contacted by anyone claiming to represent JumpCloud and you are unsure of their authenticity, please do not provide any personal/financial information, and contact us immediately at recruiting@jumpcloud.com with the subject line “Scam Notice”. #BI-Remote
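Event-based architectures are commonly fed from MongoDB change streams. A minimal sketch, assuming a replica-set deployment (change streams require one) and the same hypothetical names; in practice each change might be forwarded to a message bus such as Kafka:

import { MongoClient } from "mongodb";

// Change streams need a replica set; URI is hypothetical.
const client = new MongoClient("mongodb://localhost:27017/?replicaSet=rs0");

async function main() {
  await client.connect();
  const events = client.db("jobpe").collection("events"); // hypothetical names

  // Watch only inserts; each change carries the full new document.
  const stream = events.watch([{ $match: { operationType: "insert" } }]);
  for await (const change of stream) {
    if (change.operationType === "insert") {
      console.log("new event:", change.fullDocument);
    }
  }
}

main().catch(console.error);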

Posted 1 month ago

Apply

8.0 years

0 Lacs

Pune, Maharashtra, India

Remote

All roles at JumpCloud are Remote unless otherwise specified in the Job Description.

About JumpCloud
JumpCloud® delivers a unified open directory platform that makes it easy to securely manage identities, devices, and access across your organization. With JumpCloud, IT teams and MSPs enable users to work securely from anywhere and manage their Windows, Apple, Linux, and Android devices from a single platform. JumpCloud is IT Simplified.

About the Role
We’re looking for a Senior Software Engineer to join JumpCloud’s Data Engineering team.

Data Engineering’s Vision & Mission
Vision: Data to drive JumpCloud and our Customers.
Mission: To put in place foundational technology and process to up-level the data capabilities of our Product and our Data Warehouse/Lakehouse.

We are introducing an Event-Based Architecture, developing and refining a data model that supports JumpCloud’s growth strategy, and modernizing our Data Warehouse. A successful data engineer will exhibit an entrepreneurial spirit and enjoy tackling data engineering problems that most other people cannot solve, as well as shaping the future capabilities of JumpCloud’s data engineering, performance reporting, and data governance. Come be a part of an exciting new team where you will work on challenging projects and rich data sets, and develop valuable skills. This role involves taking full ownership of our core MongoDB and the supporting services. It will require managing, monitoring, and optimizing MongoDB clusters.

Responsibilities:
• Design, implement, and maintain scalable and reliable data pipelines for ingesting, transforming, and loading data into and out of MongoDB
• Manage, monitor, and optimize MongoDB clusters for performance, availability, and security, including sharding, replication, and backup/recovery strategies
• Develop and deploy RESTful APIs and microservices that interact with MongoDB, enabling data access and manipulation for various applications (an API sketch follows this listing)
• Collaborate closely with software engineers, data scientists, and product managers to understand data requirements and translate them into technical solutions
• Implement data governance, data quality, and data security best practices for MongoDB environments
• Troubleshoot and resolve database-related issues promptly and efficiently
• Participate in code reviews and contribute to architectural discussions to ensure high-quality and scalable solutions
• Stay up to date with the latest trends and technologies in the NoSQL database space, particularly MongoDB

We’re looking for:
• 8-12 years of experience as a Software/Data Engineer, Database Administrator, or similar role with a strong focus on MongoDB
• Proficiency in designing, implementing, and managing MongoDB sharded clusters and replica sets
• 5-8 years of experience in at least one of the following languages: Node.js (preferred), Go, Python, or Java
• 1-3 years of experience in technical leadership (leading, coaching, and/or mentoring junior team members)
• Experience developing and deploying microservices or APIs that interact with databases
• Solid understanding of database concepts (indexing, query optimization, data modeling, ACID properties for relational vs. BASE for NoSQL)
• Familiarity with cloud platforms (AWS, Azure, GCP)
• Experience with version control systems (e.g., Git)
• Excellent problem-solving, analytical, and communication skills
• Willingness to learn and embrace new technologies, languages, and frameworks (we will test your skills with a take-home exercise)
• Comfort with Linux or macOS as a desktop development environment
• A strong team player who wants to win together
• Strong communication skills

Bonus points if you have:
• Experience with technologies like Kafka, ksqlDB, Kafka Connect, PostgreSQL, or the ELK stack
• Experience building data pipelines and lakes in AWS
• Data operations experience using tools such as Terraform, CloudFormation, and/or Salt

Where you’ll be working/Location:
JumpCloud is committed to being Remote First, meaning that you are able to work remotely within the country noted in the Job Description. You must be located in and authorized to work in the country noted in the job description to be considered for this role.

Please note: There is an expectation that our engineers participate in on-call shifts. You will be expected to commit to being ready and able to respond during your assigned shift, so that alerts don’t go unaddressed.

Language: JumpCloud has teams in 15+ countries around the world and conducts its internal business in English. The interview and any additional screening process will take place primarily in English. To be considered for a role at JumpCloud, you will be required to speak and write English fluently. Any additional language requirements will be included in the details of the job description.

Why JumpCloud?
If you thrive working in a fast, SaaS-based environment and you are passionate about solving challenging technical problems, we look forward to hearing from you! JumpCloud is an incredible place to share and grow your expertise! You’ll work with amazing talent across each department who are passionate about our mission. We’re out-of-the-box thinkers, so your unique ideas and approaches for conceiving a product and/or feature will be welcome. You’ll have a voice in the organization as you work with a seasoned executive team, a supportive board, and a proven market that our customers are excited about.

One of JumpCloud’s three core values is to “Build Connections.” To us that means creating “human connection with each other regardless of our backgrounds, orientations, geographies, religions, languages, gender, race, etc. We care deeply about the people that we work with and want to see everyone succeed.” - Rajat Bhargava, CEO

Please submit your résumé and a brief explanation of why you would be a good fit for JumpCloud. Please note that JumpCloud is not accepting third-party résumés at this time.

JumpCloud is an equal opportunity employer. All applicants will be considered for employment without attention to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran or disability status.

Scam Notice: Please be aware that there are individuals and organizations that may attempt to scam job seekers by offering fraudulent employment opportunities in the name of JumpCloud. These scams may involve fake job postings, unsolicited emails, or messages claiming to be from our recruiters or hiring managers. Please note that JumpCloud will never ask for any personal account information, such as credit card details or bank account numbers, during the recruitment process. Additionally, JumpCloud will never send you a check for any equipment prior to employment. All communication related to interviews and offers from our recruiters and hiring managers will come from official company email addresses (@jumpcloud.com) and will never ask for any payment, fee, or purchase from the job seeker. If you are contacted by anyone claiming to represent JumpCloud and you are unsure of their authenticity, please do not provide any personal/financial information, and contact us immediately at recruiting@jumpcloud.com with the subject line “Scam Notice”. #BI-Remote
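For the RESTful API responsibility, a minimal Express-over-MongoDB sketch; the routes, port, and jobpe.events names are illustrative assumptions, not from the posting:

import express from "express";
import { MongoClient, ObjectId } from "mongodb";

const client = new MongoClient("mongodb://localhost:27017"); // hypothetical URI
const app = express();
app.use(express.json());

// GET one event by id; 400 on a malformed id, 404 when absent.
app.get("/events/:id", async (req, res) => {
  if (!ObjectId.isValid(req.params.id)) {
    return res.status(400).json({ error: "bad id" });
  }
  const doc = await client
    .db("jobpe")
    .collection("events")
    .findOne({ _id: new ObjectId(req.params.id) });
  return doc ? res.json(doc) : res.status(404).json({ error: "not found" });
});

// POST a new event and return its generated id.
app.post("/events", async (req, res) => {
  const result = await client.db("jobpe").collection("events").insertOne(req.body);
  res.status(201).json({ id: result.insertedId });
});

client.connect().then(() => app.listen(3000, () => console.log("listening on :3000")));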

Posted 1 month ago

Apply

4.0 years

0 Lacs

Mumbai Metropolitan Region

Remote

All roles at JumpCloud are Remote unless otherwise specified in the Job Description.

About JumpCloud
JumpCloud® delivers a unified open directory platform that makes it easy to securely manage identities, devices, and access across your organization. With JumpCloud, IT teams and MSPs enable users to work securely from anywhere and manage their Windows, Apple, Linux, and Android devices from a single platform. JumpCloud is IT Simplified.

About the Role
We’re looking for a Software Engineer to join JumpCloud’s Data Engineering team.

Data Engineering’s Vision & Mission
Vision: Data to drive JumpCloud and our Customers.
Mission: To put in place foundational technology and process to up-level the data capabilities of our Product and our Data Warehouse/Lakehouse.

We are introducing an Event-Based Architecture, developing and refining a data model that supports JumpCloud’s growth strategy, and modernizing our Data Warehouse. A successful data engineer will exhibit an entrepreneurial spirit and enjoy tackling data engineering problems that most other people cannot solve, as well as shaping the future capabilities of JumpCloud’s data engineering, performance reporting, and data governance. Come be a part of an exciting new team where you will work on challenging projects and rich data sets, and develop valuable skills. This role involves taking full ownership of our core MongoDB and the supporting services. It will require managing, monitoring, and optimizing MongoDB clusters.

Responsibilities:
• Design, implement, and maintain scalable and reliable data pipelines for ingesting, transforming, and loading data into and out of MongoDB
• Manage, monitor, and optimize MongoDB clusters for performance, availability, and security, including sharding, replication, and backup/recovery strategies (a replica-set health sketch follows this listing)
• Develop and deploy RESTful APIs and microservices that interact with MongoDB, enabling data access and manipulation for various applications
• Collaborate closely with software engineers, data scientists, and product managers to understand data requirements and translate them into technical solutions
• Implement data governance, data quality, and data security best practices for MongoDB environments
• Troubleshoot and resolve database-related issues promptly and efficiently
• Participate in code reviews and contribute to architectural discussions to ensure high-quality and scalable solutions
• Stay up to date with the latest trends and technologies in the NoSQL database space, particularly MongoDB

We’re looking for:
• 4-6 years of experience as a Data Engineer, Database Administrator, or similar role with a strong focus on MongoDB
• Proficiency in designing, implementing, and managing MongoDB sharded clusters and replica sets
• 3-6 years of experience in at least one of the following languages: Node.js (preferred), Go, Python, or Java
• Experience developing and deploying microservices or APIs that interact with databases
• Solid understanding of database concepts (indexing, query optimization, data modeling, ACID properties for relational vs. BASE for NoSQL)
• Familiarity with cloud platforms (AWS, Azure, GCP)
• Experience with version control systems (e.g., Git)
• Excellent problem-solving, analytical, and communication skills
• Willingness to learn and embrace new technologies, languages, and frameworks (we will test your skills with a take-home exercise)
• Comfort with Linux or macOS as a desktop development environment
• A strong team player who wants to win together
• Strong communication skills

Bonus points if you have:
• Experience with technologies like Kafka, ksqlDB, Kafka Connect, PostgreSQL, or the ELK stack
• Experience building data pipelines and lakes in AWS
• Data operations experience using tools such as Terraform, CloudFormation, and/or Salt

Where you’ll be working/Location:
JumpCloud is committed to being Remote First, meaning that you are able to work remotely within the country noted in the Job Description. You must be located in and authorized to work in the country noted in the job description to be considered for this role.

Please note: There is an expectation that our engineers participate in on-call shifts. You will be expected to commit to being ready and able to respond during your assigned shift, so that alerts don’t go unaddressed.

Language: JumpCloud has teams in 15+ countries around the world and conducts its internal business in English. The interview and any additional screening process will take place primarily in English. To be considered for a role at JumpCloud, you will be required to speak and write English fluently. Any additional language requirements will be included in the details of the job description.

Why JumpCloud?
If you thrive working in a fast, SaaS-based environment and you are passionate about solving challenging technical problems, we look forward to hearing from you! JumpCloud is an incredible place to share and grow your expertise! You’ll work with amazing talent across each department who are passionate about our mission. We’re out-of-the-box thinkers, so your unique ideas and approaches for conceiving a product and/or feature will be welcome. You’ll have a voice in the organization as you work with a seasoned executive team, a supportive board, and a proven market that our customers are excited about.

One of JumpCloud’s three core values is to “Build Connections.” To us that means creating “human connection with each other regardless of our backgrounds, orientations, geographies, religions, languages, gender, race, etc. We care deeply about the people that we work with and want to see everyone succeed.” - Rajat Bhargava, CEO

Please submit your résumé and a brief explanation of why you would be a good fit for JumpCloud. Please note that JumpCloud is not accepting third-party résumés at this time.

JumpCloud is an equal opportunity employer. All applicants will be considered for employment without attention to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran or disability status.

Scam Notice: Please be aware that there are individuals and organizations that may attempt to scam job seekers by offering fraudulent employment opportunities in the name of JumpCloud. These scams may involve fake job postings, unsolicited emails, or messages claiming to be from our recruiters or hiring managers. Please note that JumpCloud will never ask for any personal account information, such as credit card details or bank account numbers, during the recruitment process. Additionally, JumpCloud will never send you a check for any equipment prior to employment. All communication related to interviews and offers from our recruiters and hiring managers will come from official company email addresses (@jumpcloud.com) and will never ask for any payment, fee, or purchase from the job seeker. If you are contacted by anyone claiming to represent JumpCloud and you are unsure of their authenticity, please do not provide any personal/financial information, and contact us immediately at recruiting@jumpcloud.com with the subject line “Scam Notice”. #BI-Remote
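On the replication and availability side, a minimal health-check sketch using the replSetGetStatus admin command; the URI is hypothetical and the command requires appropriate cluster privileges:

import { MongoClient } from "mongodb";

const client = new MongoClient("mongodb://localhost:27017/?replicaSet=rs0"); // hypothetical URI

async function main() {
  await client.connect();
  // replSetGetStatus reports each member's state (PRIMARY, SECONDARY, ...)
  // and the optimes used to compute replication lag.
  const status = await client.db("admin").command({ replSetGetStatus: 1 });
  for (const member of status.members) {
    console.log(member.name, member.stateStr, "health:", member.health);
  }
  await client.close();
}

main().catch(console.error);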

Posted 1 month ago

Apply

4.0 years

0 Lacs

Pune, Maharashtra, India

Remote

All roles at JumpCloud are Remote unless otherwise specified in the Job Description.

About JumpCloud
JumpCloud® delivers a unified open directory platform that makes it easy to securely manage identities, devices, and access across your organization. With JumpCloud, IT teams and MSPs enable users to work securely from anywhere and manage their Windows, Apple, Linux, and Android devices from a single platform. JumpCloud is IT Simplified.

About the Role
We’re looking for a Software Engineer to join JumpCloud’s Data Engineering team.

Data Engineering’s Vision & Mission
Vision: Data to drive JumpCloud and our Customers.
Mission: To put in place foundational technology and process to up-level the data capabilities of our Product and our Data Warehouse/Lakehouse.

We are introducing an Event-Based Architecture, developing and refining a data model that supports JumpCloud’s growth strategy, and modernizing our Data Warehouse. A successful data engineer will exhibit an entrepreneurial spirit and enjoy tackling data engineering problems that most other people cannot solve, as well as shaping the future capabilities of JumpCloud’s data engineering, performance reporting, and data governance. Come be a part of an exciting new team where you will work on challenging projects and rich data sets, and develop valuable skills. This role involves taking full ownership of our core MongoDB and the supporting services. It will require managing, monitoring, and optimizing MongoDB clusters.

Responsibilities:
• Design, implement, and maintain scalable and reliable data pipelines for ingesting, transforming, and loading data into and out of MongoDB
• Manage, monitor, and optimize MongoDB clusters for performance, availability, and security, including sharding, replication, and backup/recovery strategies
• Develop and deploy RESTful APIs and microservices that interact with MongoDB, enabling data access and manipulation for various applications
• Collaborate closely with software engineers, data scientists, and product managers to understand data requirements and translate them into technical solutions
• Implement data governance, data quality, and data security best practices for MongoDB environments
• Troubleshoot and resolve database-related issues promptly and efficiently
• Participate in code reviews and contribute to architectural discussions to ensure high-quality and scalable solutions
• Stay up to date with the latest trends and technologies in the NoSQL database space, particularly MongoDB

We’re looking for:
• 4-6 years of experience as a Data Engineer, Database Administrator, or similar role with a strong focus on MongoDB
• Proficiency in designing, implementing, and managing MongoDB sharded clusters and replica sets
• 3-6 years of experience in at least one of the following languages: Node.js (preferred), Go, Python, or Java
• Experience developing and deploying microservices or APIs that interact with databases
• Solid understanding of database concepts (indexing, query optimization, data modeling, ACID properties for relational vs. BASE for NoSQL; a query-plan sketch follows this listing)
• Familiarity with cloud platforms (AWS, Azure, GCP)
• Experience with version control systems (e.g., Git)
• Excellent problem-solving, analytical, and communication skills
• Willingness to learn and embrace new technologies, languages, and frameworks (we will test your skills with a take-home exercise)
• Comfort with Linux or macOS as a desktop development environment
• A strong team player who wants to win together
• Strong communication skills

Bonus points if you have:
• Experience with technologies like Kafka, ksqlDB, Kafka Connect, PostgreSQL, or the ELK stack
• Experience building data pipelines and lakes in AWS
• Data operations experience using tools such as Terraform, CloudFormation, and/or Salt

Where you’ll be working/Location:
JumpCloud is committed to being Remote First, meaning that you are able to work remotely within the country noted in the Job Description. You must be located in and authorized to work in the country noted in the job description to be considered for this role.

Please note: There is an expectation that our engineers participate in on-call shifts. You will be expected to commit to being ready and able to respond during your assigned shift, so that alerts don’t go unaddressed.

Language: JumpCloud has teams in 15+ countries around the world and conducts its internal business in English. The interview and any additional screening process will take place primarily in English. To be considered for a role at JumpCloud, you will be required to speak and write English fluently. Any additional language requirements will be included in the details of the job description.

Why JumpCloud?
If you thrive working in a fast, SaaS-based environment and you are passionate about solving challenging technical problems, we look forward to hearing from you! JumpCloud is an incredible place to share and grow your expertise! You’ll work with amazing talent across each department who are passionate about our mission. We’re out-of-the-box thinkers, so your unique ideas and approaches for conceiving a product and/or feature will be welcome. You’ll have a voice in the organization as you work with a seasoned executive team, a supportive board, and a proven market that our customers are excited about.

One of JumpCloud’s three core values is to “Build Connections.” To us that means creating “human connection with each other regardless of our backgrounds, orientations, geographies, religions, languages, gender, race, etc. We care deeply about the people that we work with and want to see everyone succeed.” - Rajat Bhargava, CEO

Please submit your résumé and a brief explanation of why you would be a good fit for JumpCloud. Please note that JumpCloud is not accepting third-party résumés at this time.

JumpCloud is an equal opportunity employer. All applicants will be considered for employment without attention to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran or disability status.

Scam Notice: Please be aware that there are individuals and organizations that may attempt to scam job seekers by offering fraudulent employment opportunities in the name of JumpCloud. These scams may involve fake job postings, unsolicited emails, or messages claiming to be from our recruiters or hiring managers. Please note that JumpCloud will never ask for any personal account information, such as credit card details or bank account numbers, during the recruitment process. Additionally, JumpCloud will never send you a check for any equipment prior to employment. All communication related to interviews and offers from our recruiters and hiring managers will come from official company email addresses (@jumpcloud.com) and will never ask for any payment, fee, or purchase from the job seeker. If you are contacted by anyone claiming to represent JumpCloud and you are unsure of their authenticity, please do not provide any personal/financial information, and contact us immediately at recruiting@jumpcloud.com with the subject line “Scam Notice”. #BI-Remote
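For the indexing and query-optimization concepts above, a minimal sketch of inspecting a query plan with explain("executionStats"), using the same hypothetical names; IXSCAN in the plan means an index was used, COLLSCAN means a full collection scan:

import { MongoClient } from "mongodb";

const client = new MongoClient("mongodb://localhost:27017"); // hypothetical URI

async function main() {
  await client.connect();
  const events = client.db("jobpe").collection("events"); // hypothetical names

  // executionStats reveals how much work the query actually did.
  const plan = await events
    .find({ orgId: "acme" })
    .sort({ ts: -1 })
    .explain("executionStats");

  console.log(plan.executionStats.totalDocsExamined, "docs examined");
  console.log(plan.executionStats.nReturned, "docs returned");
  await client.close();
}

main().catch(console.error);

A healthy ratio is docs examined close to docs returned; a large gap usually means a missing or mismatched index.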

Posted 1 month ago

Apply

8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Senior Software Developer – Backend (Node.js)
Experience: 8+ years
Location: Chennai
Notice Period: Immediate to 30 days
Skills: Node.js, TypeScript, Express, RESTful API design, and asynchronous patterns

Responsibilities
• Build services in Node.js / TypeScript using Express
• Translate product requirements into scalable, fault-tolerant designs
• Lead technical design for new microservices and core APIs
• Write clean, testable code with unit and integration tests (Jest, Playwright)
• Model relational data in MySQL and PostgreSQL and optimize queries/indexes
• Implement caching, sharding, or read replicas as data volumes grow (a caching sketch follows this listing)
• Containerize services with Docker and work with GitLab CI or GitHub Actions within established CI/CD pipelines
• Perform thoughtful code reviews and drive adoption of best practices

Must-Have Qualifications
• Fluency in English, both written and spoken, for daily collaboration with distributed teams
• 8+ years of professional software engineering experience, with 3+ years focused on Node.js back-end development
• Deep knowledge of TypeScript, Express, RESTful API design, and asynchronous patterns (Promises, async/await, streams)
• Strong SQL skills and hands-on experience tuning MySQL or PostgreSQL for high concurrency
• Production experience with Docker (build, compose, multi-stage images) and CI/CD pipelines (GitLab CI, GitHub Actions, or similar)
• Proficiency with Git workflows and code review culture
• Experience implementing caching strategies (e.g., Redis)
• Passion for automated testing, clean architecture, and scalable design
• Understanding of OAuth 2.0, JWT, and secure coding practices

Nice-to-Have
• Experience with TypeORM, NestJS, or Fastify
• Experience exposing or consuming GraphQL
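For the caching responsibility, a minimal read-through Redis cache in the stack this posting names (TypeScript, Express, PostgreSQL); the connection strings, products table, route, and 60-second TTL are illustrative assumptions:

import express from "express";
import { createClient } from "redis";
import { Pool } from "pg";

const redis = createClient({ url: "redis://localhost:6379" }); // hypothetical URL
const pool = new Pool({ connectionString: "postgres://localhost/app" }); // hypothetical DSN
const app = express();

// Read-through cache: serve from Redis when possible, otherwise
// query PostgreSQL and cache the row for 60 seconds.
app.get("/products/:id", async (req, res) => {
  const key = `product:${req.params.id}`;
  const cached = await redis.get(key);
  if (cached) return res.json(JSON.parse(cached));

  const { rows } = await pool.query("SELECT * FROM products WHERE id = $1", [req.params.id]);
  if (rows.length === 0) return res.status(404).json({ error: "not found" });

  await redis.set(key, JSON.stringify(rows[0]), { EX: 60 });
  return res.json(rows[0]);
});

redis.connect().then(() => app.listen(3000));

The TTL bounds staleness; writes that must be visible immediately would also delete or overwrite the cache key on update.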

Posted 1 month ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About SecuvyAI
SecuvyAI is a cutting-edge SaaS company specializing in responsible data and AI governance. Our platform leverages machine learning and GenAI to automate the protection of sensitive data, privacy, and compliance across structured and unstructured sources in the cloud and on-premises. We serve Fortune 500 clients and hyper-growth technology firms who depend on us to manage billions of records, unlock actionable business insights, and confidently ensure compliance with global regulations.

Position Overview
We are seeking a highly skilled SQL Developer/Architect with deep expertise in designing, developing, and optimizing large-scale, mission-critical PostgreSQL databases, along with TypeScript or ReactJS. You’ll join a team dedicated to responsible data discovery, governance, and privacy at scale. This is a unique opportunity to work with massive datasets and cutting-edge data platforms powering the next generation of AI and compliance solutions.

Key Responsibilities
• Architect, design, and implement highly scalable and available PostgreSQL database solutions to support SaaS product requirements.
• Develop, optimize, and maintain complex SQL queries, stored procedures, views, triggers, and functions for high-volume production environments.
• Model complex data structures and create efficient data pipelines for structured and unstructured data at scale.
• Tune database performance, including indexing, partitioning, sharding, and query optimization, for petabyte-scale workloads (a partitioning sketch follows this listing).
• Ensure database reliability, security, backup, disaster recovery, and regulatory compliance across multi-tenant cloud environments.
• Collaborate closely with Engineering, AI/ML, and Product teams to translate business needs into robust database architectures.
• Document database standards, schemas, and processes; conduct thorough code reviews and performance audits.
• Stay current with advancements in PostgreSQL and the broader database ecosystem; recommend and implement best practices.

Required Qualifications
• Bachelor’s or Master’s degree in Computer Science, Information Systems, or a related field.
• 5+ years of hands-on experience with PostgreSQL in large-scale, high-availability, and mission-critical applications.
• Demonstrated ability to model, design, maintain, and tune complex relational databases managing billions of records.
• Advanced SQL skills, including PL/pgSQL, window functions, and performance tuning.
• Deep understanding of database internals, high-availability architectures, security, and compliance requirements for sensitive data.
• Familiarity with cloud-based deployments (AWS, GCP, or Azure) and automation of database operations.
• Experience working with data privacy, compliance, or regulatory frameworks (GDPR, CCPA, HIPAA, etc.) is a strong plus.
• Excellent written and verbal communication skills.

Preferred Qualifications
• Experience with emerging data catalog, data lineage, or AI governance platforms.
• Exposure to large-scale unstructured data management and search technologies.

SecuvyAI Offers
• An opportunity to work at the intersection of AI, Privacy, and Data Governance with a talented, mission-driven team.
• A high-impact role building transformative solutions for top enterprises and global brands.
• Flexible work arrangements and competitive compensation.

This job description is synthesized using general SQL/PostgreSQL developer/architect requirements from leading sources as well as unique company attributes for SecuvyAI from the provided description.
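Partitioning is one of the tuning levers this role names. A minimal sketch of declarative range partitioning plus a supporting index, via the node-postgres client; the DSN, table, and column names are hypothetical:

import { Pool } from "pg";

const pool = new Pool({ connectionString: "postgres://localhost/secuvy" }); // hypothetical DSN

async function main() {
  // Range partitioning by month keeps per-partition indexes small and
  // lets old data be detached or dropped cheaply at scale.
  await pool.query(`
    CREATE TABLE IF NOT EXISTS audit_events (
      id          bigint GENERATED ALWAYS AS IDENTITY,
      tenant_id   bigint      NOT NULL,
      occurred_at timestamptz NOT NULL,
      payload     jsonb,
      PRIMARY KEY (id, occurred_at)  -- PK must include the partition key
    ) PARTITION BY RANGE (occurred_at);
  `);
  await pool.query(`
    CREATE TABLE IF NOT EXISTS audit_events_2024_01
      PARTITION OF audit_events
      FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');
  `);
  // Per-tenant, time-ordered lookups hit this index on each partition.
  await pool.query(`
    CREATE INDEX IF NOT EXISTS audit_events_tenant_ts
      ON audit_events (tenant_id, occurred_at DESC);
  `);
  await pool.end();
}

main().catch(console.error);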

Posted 1 month ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Dive in and do the best work of your career at DigitalOcean. Journey alongside a strong community of top talent who are relentless in their drive to build the simplest scalable cloud. If you have a growth mindset, naturally like to think big and bold, and are energized by the fast-paced environment of a true industry disruptor, you’ll find your place here. We value winning together—while learning, having fun, and making a profound difference for the dreamers and builders in the world. At DigitalOcean, we're not just simplifying cloud computing - we're revolutionizing it. We serve the developer community and the businesses they build with a relentless pursuit of simplicity. With our customers at the heart of what we do - and powered by a diverse culture that values boldness, speed, simplicity, ownership, and a growth mindset - we are committed to building truly useful products. Come swim with us! Position Overview We are looking for a Software Engineer who is passionate about writing clean, maintainable code and eager to contribute to the success of our platform.As a Software Engineer at DigitalOcean, you will join a dynamic team dedicated to revolutionizing cloud computing.We’re looking for an experienced Software Engineer II to join our growing engineering team. You’ll work on building and maintaining features that directly impact our users, from creating scalable backend systems to improving performance for thousands of customers. What You’ll Do Design, develop, and maintain backend systems and services that power our platform. Collaborate with cross-functional teams to design and implement new features, ensuring the best possible developer experience for our users. Troubleshoot complex technical problems and find efficient solutions in a timely manner. Write high-quality, testable code, and contribute to code reviews to maintain high standards of development practices. Participate in architecture discussions and contribute to the direction of the product’s technical vision. Continuously improve the reliability, scalability, and performance of the platform. Participate in rotating on-call support, providing assistance with production systems when necessary. Mentor and guide junior engineers, helping them grow technically and professionally. What You’ll Add To DigitalOcean A degree in Computer Science, Engineering, or a related field, or equivalent experience. Proficiency in at least one modern programming language (e.g., Go, Python, Ruby, Java, etc.), with a strong understanding of data structures, algorithms, and software design principles. Hands-on experience with cloud computing platforms and infrastructure-as-code practices. Strong knowledge of RESTful API design and web services architecture. Demonstrated ability to build scalable and reliable systems that operate in production at scale. Excellent written and verbal communication skills to effectively collaborate with teams. A deep understanding of testing principles and the ability to write automated tests that ensure the quality of code. Familiarity with agile methodologies, including sprint planning, continuous integration, and delivery. Knowledge of advanced database concepts such as sharding, indexing, and performance tuning. Exposure to monitoring and observability tools such as Prometheus, Grafana, or ELK Stack. Experience with infrastructure-as-code tools such as Terraform or CloudFormation. Familiarity with Kubernetes, Docker, and other containerization/orchestration tools. 
Why You’ll Like Working For DigitalOcean We innovate with purpose. You’ll be a part of a cutting-edge technology company with an upward trajectory, one that is proud to simplify cloud and AI so builders can spend more time creating software that changes the world. As a member of the team, you will be a Shark who thinks big, bold, and scrappy, like an owner with a bias for action and a powerful sense of responsibility for customers, products, employees, and decisions. We prioritize career development. At DO, you’ll do the best work of your career. You will work with some of the smartest and most interesting people in the industry. We are a high-performance organization that will always challenge you to think big. Our organizational development team will provide you with resources to ensure you keep growing. We provide employees with reimbursement for relevant conferences, training, and education. All employees have access to LinkedIn Learning's 10,000+ courses to support their continued growth and development. We care about your well-being. Regardless of your location, we will provide you with a competitive array of benefits to support you, from our Employee Assistance Program to Local Employee Meetups to our flexible time off policy, to name a few. While the philosophy around our benefits is the same worldwide, specific benefits may vary based on local regulations and preferences. We reward our employees. The salary range for this position is based on market data, relevant years of experience, and skills. You may qualify for a bonus in addition to base salary; bonus amounts are determined based on company and individual performance. We also provide equity compensation to eligible employees, including equity grants upon hire and the option to participate in our Employee Stock Purchase Program. We value diversity and inclusion. We are an equal-opportunity employer, and recognize that diversity of thought and background builds stronger teams and products to serve our customers. We approach diversity and inclusion seriously and thoughtfully. We do not discriminate on the basis of race, religion, color, ancestry, national origin, caste, sex, sexual orientation, gender, gender identity or expression, age, disability, medical condition, pregnancy, genetic makeup, marital status, or military service. This job is located in Hyderabad, India

Posted 1 month ago

Apply

7.0 years

0 Lacs

India

Remote

Job Title: Remote MongoDB Developer Location: Remote / Hybrid (Preferred regions: EMEA, USA, UK, Japan) Experience: 2–7 years Type: Full-time / Contract Your Role We are hiring a skilled MongoDB Developer to join our remote engineering team. In this role, you will be responsible for designing, managing, and optimizing MongoDB-based database systems that support high-volume, mission-critical applications. You’ll work closely with backend and DevOps teams to ensure data integrity, performance, and scalability across projects. Key Responsibilities Design and implement MongoDB databases that are scalable and high-performing Develop efficient queries, indexes, and aggregation pipelines Maintain and optimize data models for new and existing applications Collaborate with developers to integrate MongoDB into backend services and APIs Monitor database performance, usage, and security Ensure database backup, replication, and disaster recovery procedures Participate in schema design, migration strategies, and documentation Core Skills Required Strong experience with MongoDB and document-oriented data modeling Proficiency in writing optimized queries and aggregation pipelines Familiarity with MongoDB tools (Compass, Atlas, Ops Manager) Experience integrating MongoDB with backend code (Node.js, Python, etc.) Knowledge of indexing, sharding, and replication techniques Understanding of JSON, BSON, and NoSQL database principles Experience in version control systems like Git Skills We Value Exposure to cloud-based MongoDB (MongoDB Atlas) Familiarity with other NoSQL or relational databases (Redis, PostgreSQL, etc.) Experience in schema migrations and performance tuning Working knowledge of microservices or serverless architectures Ability to interpret complex datasets and support analytics workflows Experience working in Agile teams and DevOps environments What You’ll Get Flexible remote work setup Work with innovative global clients and dynamic projects Access to a supportive community of engineers, tech leads, and mentors Opportunities to grow your freelance/remote portfolio Continuous skill-building and learning resources
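For illustration of the query and indexing work this role describes, here is a minimal PyMongo sketch; the database, collection, and field names ("shop", "orders", "status", "customer_id", "total") are hypothetical placeholders, not part of the posting.

```python
# Minimal sketch: a compound index plus an aggregation pipeline with PyMongo.
# Assumes a MongoDB instance on localhost; all names are hypothetical.
from pymongo import MongoClient, ASCENDING

client = MongoClient("mongodb://localhost:27017")
orders = client["shop"]["orders"]

# Compound index supporting the query below (equality on status first,
# then grouping work on customer_id).
orders.create_index([("status", ASCENDING), ("customer_id", ASCENDING)])

# Aggregation pipeline: total completed-order revenue per customer,
# highest spenders first.
pipeline = [
    {"$match": {"status": "completed"}},
    {"$group": {"_id": "$customer_id", "revenue": {"$sum": "$total"}}},
    {"$sort": {"revenue": -1}},
    {"$limit": 10},
]
for doc in orders.aggregate(pipeline):
    print(doc)
```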

Posted 1 month ago

Apply

7.0 years

0 Lacs

Greater Kolkata Area

On-site

Who are we looking for? We are looking for a candidate with 7+ years of administrator experience in MongoDB, Cassandra, and Snowflake databases. This role is focused on production support, ensuring database performance, availability, and reliability across multiple clusters. The ideal candidate will be responsible for ensuring the availability, performance, and security of our NoSQL database environment. You will provide 24/7 production support, troubleshoot issues, monitor system health, optimize performance, and collaborate with cross-functional teams to maintain a reliable and efficient Snowflake platform. Technical Skills Proven experience as a MongoDB/Cassandra/Snowflake database administrator or similar role in production support environments. 7+ years of hands-on experience as a MongoDB DBA supporting production environments. Strong understanding of MongoDB architecture, including replica sets, sharding, and the aggregation framework. Proficiency in writing and optimizing complex MongoDB queries and indexes. Experience with backup and recovery solutions (e.g., mongodump, mongorestore, Ops Manager). Solid knowledge of Linux/Unix systems and scripting (Shell, Python, or similar). Experience with monitoring tools like Prometheus, Grafana, DataStax OpsCenter, or similar. Understanding of distributed systems and high-availability concepts. Proficiency in troubleshooting cluster issues, performance tuning, and capacity planning. In-depth understanding of data management (e.g., permissions, recovery, security, and monitoring). Understanding of ETL/ELT tools and data integration patterns. Strong troubleshooting and problem-solving skills. Excellent communication and collaboration abilities. Ability to work in a 24/7 support rotation and handle urgent production issues. Strong understanding of relational database concepts. Experience with database design, modeling, and optimization is good to have. Familiarity with data security best practices and backup strategies. Responsibilities: Support & Incident Management: Provide 24/7 support for MongoDB environments, including on-call rotation. Monitor system health and respond to alerts, incidents, and performance degradation issues. Troubleshoot and resolve production database issues in a timely manner. Database Administration: Install, configure, and upgrade MongoDB clusters in on-prem or cloud environments. Perform routine maintenance including backups, restores, indexing, and data migration. Monitor and manage replica sets, sharding, and cluster health. Tuning & Optimization: Analyze query and indexing strategies to improve performance. Tune MongoDB server parameters and JVM settings where applicable. Monitor and optimize disk I/O, memory usage, and CPU utilization. Security & Compliance: Implement and manage access control, roles, and authentication mechanisms (LDAP, x.509, SCRAM). Ensure encryption, auditing, and compliance with data governance and security policies. Automation & Monitoring: Create and maintain scripts for automation of routine tasks (e.g., backups, health checks). Set up and maintain monitoring tools (e.g., MongoDB Ops Manager, Prometheus/Grafana, MMS). Documentation & Collaboration: Maintain documentation on architecture, configurations, procedures, and incident reports. Work closely with application and infrastructure teams to support new releases and migrations. Preferred: Experience with MongoDB Atlas and other cloud-managed MongoDB services. MongoDB certification (MongoDB Certified DBA). Experience with automation tools like Ansible, Terraform, or Puppet. Understanding of DevOps practices and CI/CD integration.
Familiarity with other NoSQL and RDBMS technologies is a plus. Education qualification: Any degree (ref:hirist.tech)
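As a rough sketch of the replica-set health checks this role automates, here is a PyMongo script; the hosts, replica-set name, and the 30-second lag threshold are hypothetical and would be tuned per SLA.

```python
# Minimal replica-set health check via the replSetGetStatus command.
# Assumes PyMongo and connectivity to the replica set; names are placeholders.
from pymongo import MongoClient

client = MongoClient("mongodb://db1:27017,db2:27017/?replicaSet=rs0")
status = client.admin.command("replSetGetStatus")

primary = None
for member in status["members"]:
    print(f'{member["name"]}: {member["stateStr"]}')
    if member["stateStr"] == "PRIMARY":
        primary = member

# Flag replication lag: compare each secondary's optime date to the primary's.
if primary:
    for member in status["members"]:
        if member["stateStr"] == "SECONDARY":
            lag = (primary["optimeDate"] - member["optimeDate"]).total_seconds()
            if lag > 30:  # alert threshold in seconds (arbitrary placeholder)
                print(f'WARNING: {member["name"]} lagging by {lag:.0f}s')
```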

Posted 1 month ago

Apply

10.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Company Description Celcius Logistics Solutions Pvt Ltd is India's first and only asset-light cold chain marketplace, offering a web and app-based SaaS platform that brings the entire cold chain network online. Our platform enables seamless connections between transporters and manufacturers of perishable products, serving key sectors like dairy, pharmaceuticals, fresh agro produce, and frozen products. We provide comprehensive network monitoring and booking capabilities for reefer vehicle loads and cold storage space across India. With over 3,500 registered reefer trucks and 100+ cold storage facilities, we are revolutionizing the cold chain industry in India. Role Description We are looking for a Senior Database Administrator (DBA) to lead the design, implementation, and management of high-performance, highly available database systems. This role is critical to support real-time data ingestion, processing, and storage for our vehicle telemetry platforms. You will be responsible for ensuring 24/7 database availability, optimizing performance for millions of transactions per day, and enabling scalability for future growth. Key Responsibilities: Design and implement fault-tolerant, highly available database architectures. Manage clustering, replication, and automated failover systems. Ensure zero-downtime during updates, scaling, and maintenance. Monitor and optimize database performance and query efficiency. Tune database configurations for peak performance under load. Implement caching and indexing strategies. Design data models for real-time telemetry ingestion. Implement partitioning, sharding, and retention policies. Ensure data consistency, archival, and lifecycle management. Set up and enforce database access controls and encryption. Perform regular security audits and comply with data regulations. Implement backup, disaster recovery, and restore procedures. Qualifications 10+ years as a hands-on DBA managing production databases Experience handling high-volume, real-time data (ideally telemetry or IoT) Familiarity with microservices-based architectures Proven track record in implementing high-availability and disaster recovery solutions Advanced knowledge of enterprise RDBMS (Oracle, PostgreSQL, MongoDB, etc.) Experience with time-series and geospatial data Hands-on experience with clustering, sharding, and replication Expertise in performance tuning and query optimization Proficiency in database automation and monitoring tools Strong scripting skills (Python, Shell, etc.)
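To make the partitioning, sharding, and retention responsibilities above concrete, here is a hedged PyMongo sketch for a telemetry workload; the cluster endpoint, database/collection names ("telemetry", "readings"), shard key ("vehicle_id"), and 90-day retention window are all hypothetical.

```python
# Minimal sketch: shard a telemetry collection and enforce a retention policy.
# Assumes a running sharded cluster reachable through a mongos router.
from pymongo import MongoClient

client = MongoClient("mongodb://mongos-host:27017")

# Enable sharding on the database, then shard the collection on a hashed
# vehicle_id so writes from thousands of reefer trucks spread evenly.
client.admin.command("enableSharding", "telemetry")
client.admin.command(
    "shardCollection",
    "telemetry.readings",
    key={"vehicle_id": "hashed"},
)

# A TTL index implements the retention policy: documents expire 90 days
# after their timestamp field "ts".
client["telemetry"]["readings"].create_index(
    "ts", expireAfterSeconds=90 * 24 * 3600
)
```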

Posted 1 month ago

Apply

8.0 years

9 - 10 Lacs

Hyderābād

On-site

We are looking for a Senior MongoDB Developer to join our technology team at Clarivate. The successful candidate will be responsible for the design, implementation, and optimization of MongoDB databases to ensure high performance and availability, both on-premises and in the cloud. About You – experience, education, skills and accomplishments At least 8 years of experience in NoSQL database management such as MongoDB (primary), Azure CosmosDB, and AWS DocumentDB, including performance tuning and architecture design. Bachelor’s degree in computer science, Information Technology, or a related field, or equivalent experience Proven work experience as a MongoDB/Cosmos/DocumentDB performance, data modelling (document), and architecture expert. Experience with database architecture and design to support application development projects. Knowledge of MongoDB/Cosmos/DocumentDB features such as sharding and replication. (IMP) Proficient understanding of code versioning tools such as Git. Experience with both on-premises and cloud-based database hosting. It would be great if you also had… In-depth understanding of database architectures and data modelling principles Good knowledge of modern DevOps practices and their adoption in database development Experience in dealing with data migrations from relational to NoSQL. What will you be doing in this role? Design and implement database schemas that represent and support business processes. Develop and maintain database architecture that supports scalability and high availability, both on-premises and in the cloud. Optimize database systems for performance efficiency. Monitor database performance and adjust parameters for optimal database speed. Implement effective security protocols, data protection measures, and storage solutions. Run diagnostic tests and quality assurance procedures. Document processes, systems, and their associated configurations. Work with DBAs and development teams to develop and maintain procedures for database backup and recovery. About the team: The Database Management Service team covers a wide range of tasks essential for maintaining, optimizing, and securing databases. One end of the spectrum consists of infrastructure DBAs, and the other end is DB specialists (SMEs) with a DevOps mindset who partner heavily with the DBAs, application, and DevOps teams. We have a philosophy and open mindset of learning new technology that enhances the business objectives, and we encourage and support growth across the team to become deeply knowledgeable in other database technologies. Hours of work This is a full-time opportunity with Clarivate. 9 hours per day including lunch break At Clarivate, we are committed to providing equal employment opportunities for all qualified persons with respect to hiring, compensation, promotion, training, and other terms, conditions, and privileges of employment. We comply with applicable laws and regulations governing non-discrimination in all locations.
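As a small illustration of how the replication features named above surface to applications, here is a hedged PyMongo sketch of a replication-aware connection; the hosts, replica-set name, and collection names are placeholders.

```python
# Minimal sketch: replication-aware client settings via URI options.
# Reads may be served by secondaries; writes wait for majority acknowledgment.
from pymongo import MongoClient

client = MongoClient(
    "mongodb://db1:27017,db2:27017,db3:27017/"
    "?replicaSet=rs0"
    "&readPreference=secondaryPreferred"  # offload reads when a secondary is available
    "&w=majority"                         # majority write concern for durability
    "&retryWrites=true"
)

# Hypothetical lookup; the database/collection are illustrative only.
doc = client["catalog"]["items"].find_one({"sku": "ABC-123"})
print(doc)
```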

Posted 1 month ago

Apply

6.0 years

15 - 18 Lacs

Chennai

On-site

Hi, Greetings from EWarriors Tech Solutions. We are hiring for the following position: Role: Back End Developer (NodeJS Developer) Location: Chennai (Onsite) Experience: 6 – 8 Years Timing: 02.00 PM – 11.00 PM IST Notice: Immediate Joiners / Less than 15 Days Responsibilities: · Build services in Node.js/TypeScript using Express · Translate product requirements into scalable, fault-tolerant designs · Lead technical design for new microservices and core APIs · Write clean, testable code with unit and integration tests (Jest, Playwright) · Model relational data in MySQL and PostgreSQL and optimize queries/indexes · Implement caching, sharding, or read replicas as data volumes grow · Containerize services with Docker and work with GitLab CI or GitHub Actions within established CI/CD pipelines · Perform thoughtful code reviews, drive adoption of best practices Must-Have Qualifications: Fluency in English, both written and spoken, for daily collaboration with distributed teams 8+ years professional software engineering experience, with 3+ years focused on Node.js back-end development Deep knowledge of TypeScript, Express, RESTful API design, and asynchronous patterns (Promises, async/await, streams) Strong SQL skills and hands-on experience tuning MySQL or PostgreSQL for high concurrency Production experience with Docker (build, compose, multi-stage images) and CI/CD pipelines (GitLab CI, GitHub Actions, or similar) Proficiency with Git workflows and code review culture Experience implementing caching strategies (e.g., Redis) Passion for automated testing, clean architecture, and scalable design Understanding of OAuth 2.0, JWT, and secure coding practices Nice-to-Have Experience with TypeORM, NestJS or Fastify Experience exposing or consuming GraphQL If interested, kindly share your résumé to bharathi@ewarriorstechsolutions.com or contact 8015568995. Job Type: Full-time Pay: ₹1,500,000.00 - ₹1,800,000.00 per year Schedule: Monday to Friday UK shift Work Location: In person
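The caching strategy the posting asks about is usually the cache-aside pattern. The posting's stack is Node.js, but the pattern is language-agnostic; below is a minimal Python sketch for brevity, where `get_user_from_db` and the key scheme are hypothetical placeholders.

```python
# Cache-aside sketch with Redis: read from cache, fall back to the database,
# then populate the cache with a TTL. Illustrative only; names are placeholders.
import json
import redis

r = redis.Redis(host="localhost", port=6379, db=0)
TTL_SECONDS = 300  # cache entries expire after 5 minutes (arbitrary)

def get_user(user_id: int) -> dict:
    key = f"user:{user_id}"
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)          # cache hit
    user = get_user_from_db(user_id)       # cache miss: query the database
    r.setex(key, TTL_SECONDS, json.dumps(user))
    return user

def get_user_from_db(user_id: int) -> dict:
    # Placeholder for a MySQL/PostgreSQL lookup.
    return {"id": user_id, "name": "example"}

print(get_user(42))
```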

Posted 1 month ago

Apply

7.0 years

0 Lacs

India

Remote

Role: Neo4j Engineer Overall IT Experience: 7+ years Relevant experience: (Graph Databases: 4+ years, Neo4j: 2+ years) Location: Remote Company Description Bluetick Consultants is a technology-driven firm that supports hiring remote developers, building technology products, and enabling end-to-end digital transformation. With previous experience in top technology companies such as Amazon, Microsoft, and Craftsvilla, we understand the needs of our clients and provide customized solutions. Our team has expertise in emerging technologies, backend and frontend development, cloud development, and mobile technologies. We prioritize staying up-to-date with the latest technological advances to create a long-term impact and grow together with our clients. Key Responsibilities • Graph Database Architecture: Design and implement Neo4j graph database schemas optimized for fund administration data relationships and AI-powered queries • Knowledge Graph Development: Build comprehensive knowledge graphs connecting entities like funds, investors, companies, transactions, legal documents, and market data • Graph-AI Integration: Integrate Neo4j with AI/ML pipelines, particularly for enhanced RAG (Retrieval-Augmented Generation) systems and semantic search capabilities • Complex Relationship Modeling: Model intricate relationships between Limited Partners, General Partners, fund structures, investment flows, and regulatory requirements • Query Optimization: Develop high-performance Cypher queries for real-time analytics, relationship discovery, and pattern recognition • Data Pipeline Integration: Build ETL processes to populate and maintain graph databases from various data sources including FundPanel.io, legal documents, and external market data using domain specific ontologies • Graph Analytics: Implement graph algorithms for fraud detection, risk assessment, relationship scoring, and investment opportunity identification • Performance Tuning: Optimize graph database performance for concurrent users and complex analytical queries • Documentation & Standards: Establish graph modelling standards, query optimization guidelines, and comprehensive technical documentation Key Use Cases You'll Enable • Semantic Search Enhancement: Create knowledge graphs that improve AI search accuracy by understanding entity relationships and context • Investment Network Analysis: Map complex relationships between investors, funds, portfolio companies, and market segments • Compliance Graph Modelling: Model regulatory relationships and fund terms to support automated auditing and compliance validation • Customer Relationship Intelligence: Build relationship graphs for customer relations monitoring and expansion opportunity identification • Predictive Modelling Support: Provide graph-based features for investment prediction and risk assessment models • Document Relationship Mapping: Connect legal documents, contracts, and agreements through entity and relationship extraction Required Qualifications • Bachelor's degree in Computer Science, Data Engineering, or related field • 7+ years of overall IT Experience • 4+ years of experience with graph databases, with 2+ years specifically in Neo4j • Strong background in data modelling, particularly for complex relationship structures • Experience with financial services data and regulatory requirements preferred • Proven experience integrating graph databases with AI/ML systems • Understanding of knowledge graph concepts and semantic technologies • Experience with high-volume, 
production-scale graph database implementations Technology Skills • Graph Databases: Neo4j (primary), Cypher query language, APOC procedures, Neo4j Graph Data Science library • Programming: Python, Java, or Scala for graph data processing and integration • AI Integration: Experience with graph-enhanced RAG systems, vector embeddings in graph context, GraphRAG implementations • Data Processing: ETL pipelines, data transformation, real-time data streaming (Kafka, Apache Spark) • Cloud Platforms: Neo4j Aura, Azure integration, containerized deployments • APIs: Neo4j drivers, REST APIs, GraphQL integration • Analytics: Graph algorithms (PageRank, community detection, shortest path, centrality measures) • Monitoring: Neo4j monitoring tools, performance profiling, query optimization • Integration: Elasticsearch integration, vector database connections, multi-modal data handling Specific Technical Requirements • Knowledge Graph Construction: Entity resolution, relationship extraction, ontology modelling • Cypher Expertise: Advanced Cypher queries, stored procedures, custom functions • Scalability: Clustering, sharding, horizontal scaling strategies • Security: Graph-level security, role-based access control, data encryption • Version Control: Graph schema versioning, migration strategies • Backup & Recovery: Graph database backup strategies, disaster recovery planning Industry Context Understanding • Fund Administration: Understanding of fund structures, capital calls, distributions, and investor relationships • Financial Compliance: Knowledge of regulatory requirements and audit trails in financial services • Investment Workflows: Understanding of due diligence processes, portfolio management, and investor reporting • Legal Document Structures: Familiarity with LPA documents, subscription agreements, and fund formation documents Collaboration Requirements • AI/ML Team: Work closely with GenAI engineers to optimize graph-based AI applications • Data Architecture Team: Collaborate on overall data architecture and integration strategies • Backend Developers: Integrate graph databases with application APIs and microservices • DevOps Team: Ensure proper deployment, monitoring, and maintenance of graph database infrastructure • Business Stakeholders: Translate business requirements into effective graph models and queries Performance Expectations • Query Performance: Ensure sub-second response times for standard relationship queries • Scalability: Support 100k+ users with concurrent access to graph data • Accuracy: Maintain data consistency and relationship integrity across complex fund structures • Availability: Ensure 99.9% uptime for critical graph database services • Integration Efficiency: Seamless integration with existing FundPanel.io systems and new AI services This role offers the opportunity to work at the intersection of advanced graph technology and artificial intelligence, creating innovative solutions that will transform how fund administrators understand and leverage their data relationships.
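To ground the Cypher expertise this role calls for, here is a minimal sketch using the official Neo4j Python driver; the graph model (Investor and Fund nodes joined by INVESTED_IN relationships) is hypothetical, loosely following the fund-administration entities named in the posting.

```python
# Minimal sketch: a parameterized Cypher query through the Neo4j Python driver.
# Connection details, credentials, and the graph schema are placeholders.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("neo4j://localhost:7687", auth=("neo4j", "password"))

query = """
MATCH (i:Investor)-[r:INVESTED_IN]->(f:Fund {name: $fund})
RETURN i.name AS investor, r.amount AS amount
ORDER BY amount DESC
LIMIT 10
"""

with driver.session() as session:
    # Keyword arguments become Cypher parameters, avoiding string interpolation.
    for record in session.run(query, fund="Growth Fund II"):
        print(record["investor"], record["amount"])

driver.close()
```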

Posted 1 month ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Velotio Technologies is a product engineering company working with innovative startups and enterprises. We are certified and recognized as one of the best companies to work for in India. We have provided full-stack product development for 110+ startups across the globe, building products in the cloud-native, data engineering, B2B SaaS, IoT & Machine Learning space. Our team of 400+ elite software engineers solves hard technical problems while transforming customer ideas into successful products. About the Role: We are seeking an experienced Infrastructure Site Reliability Engineer (SRE) to join our team. This role is critical for ensuring the reliability, scalability, and performance of our infrastructure, particularly in managing and optimizing high-throughput data systems. You will work closely with engineering teams to design, implement, and maintain robust infrastructure solutions that meet our growing needs. As an Infrastructure SRE, you will be at the forefront of managing and optimizing our Kafka and OpenSearch clusters, AWS services, and multi-cloud environments. Your expertise will be key in ensuring the smooth operation of our infrastructure, enabling us to deliver high-performance and reliable services. This is an exciting opportunity to contribute to a dynamic team that is shaping the future of data observability and orchestration pipelines. Requirements Responsibilities- Kafka Management: Set up, manage, and scale Kafka clusters, including implementing and optimizing Kafka Streams and Connect for seamless data integration. Fine-tune Kafka brokers and optimize producer/consumer configurations to ensure peak performance OpenSearch Expertise: Configure and manage OpenSearch clusters, optimizing indexing strategies and query performance. Ensure high availability and fault tolerance through effective data replication and sharding. Set up monitoring and alerting systems to track cluster health AWS Services Proficiency: Manage AWS RDS instances, including provisioning, configuration, and scaling. Optimize database performance and ensure robust backup and recovery strategies. Deploy, manage, and scale Kubernetes clusters on AWS EKS, configuring networking and security policies, and integrating EKS with CI/CD pipelines for automated deployment Multi-Cloud Environment Management: Design and manage infrastructure across multiple cloud providers, ensuring seamless cloud networking and security. Implement disaster recovery strategies and optimize costs in a multi-cloud setup Linux Administration: Optimize Linux server performance, manage system resources, and automate processes using shell scripting. Apply best practices for security hardening and troubleshoot Linux-related issues effectively CI/CD Automation: Design and manage CI/CD pipelines using tools like Jenkins, GitLab CI, or CircleCI, and ArgoCD. Automate deployment processes, integrate with version control systems, and implement advanced deployment strategies like blue-green deployments, canary releases, and rolling updates.
Ensure security and compliance within CI/CD processes Qualification- Bachelor's, Master's, or Doctorate in Computer Science or a related field Deep knowledge of Kafka, with hands-on experience in cluster setup, management, and performance tuning Expertise in OpenSearch cluster management, indexing, query optimization, and monitoring Proficiency with AWS services, particularly RDS and EKS, including experience in database management, performance tuning, and Kubernetes deployment Experience in managing multi-cloud environments, with a strong understanding of cloud networking, security, and cost optimization strategies Strong background in Linux administration, including system performance tuning, shell scripting, and security hardening Proficiency with CI/CD automation tools and best practices, with a focus on secure and compliant pipeline management Strong analytical and problem-solving skills, essential for troubleshooting complex technical challenges Benefits Our Culture : We have an autonomous and empowered work culture encouraging individuals to take ownership and grow quickly Flat hierarchy with fast decision making and a startup-oriented "get things done" culture A strong, fun, and positive environment with regular celebrations of our success. We pride ourselves in creating an inclusive, diverse, and authentic environment At Velotio, we embrace diversity. Inclusion is a priority for us, and we are eager to foster an environment where everyone feels valued. We welcome applications regardless of ethnicity or cultural background, age, gender, nationality, religion, disability or sexual orientation.
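For a feel of the producer tuning this role mentions, here is a hedged sketch using the confluent-kafka client; the broker address, topic, and every value are illustrative starting points rather than recommendations.

```python
# Sketch: throughput-oriented Kafka producer configuration.
# Requires confluent-kafka; broker and topic names are placeholders.
from confluent_kafka import Producer

producer = Producer({
    "bootstrap.servers": "kafka1:9092",
    "acks": "all",               # durability: wait for in-sync replicas
    "linger.ms": 20,             # batch small messages for up to 20 ms
    "batch.size": 131072,        # larger batches amortize per-request overhead
    "compression.type": "lz4",   # trade cheap CPU for less network/disk I/O
})

def on_delivery(err, msg):
    if err is not None:
        print(f"delivery failed: {err}")

for i in range(1000):
    producer.produce("events", value=f"event-{i}".encode(), callback=on_delivery)
producer.flush()
```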

Posted 1 month ago

Apply

8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Senior Software Developer – Backend (Node.js) Exp: 8+ Location: Chennai Notice Period: Immediate to 30 days Skills: Node.js, TypeScript, Express, RESTful API design, and asynchronous patterns • Build services in Node.js / TypeScript using Express • Translate product requirements into scalable, fault-tolerant designs • Lead technical design for new microservices and core APIs • Write clean, testable code with unit and integration tests (Jest, Playwright) • Model relational data in MySQL and PostgreSQL and optimize queries/indexes • Implement caching, sharding, or read replicas as data volumes grow • Containerize services with Docker and work with GitLab CI or GitHub Actions within established CI/CD pipelines • Perform thoughtful code reviews, drive adoption of best practices Must-Have Qualifications • Fluency in English, both written and spoken, for daily collaboration with distributed teams • 8+ years professional software engineering experience, with 3+ years focused on Node.js back-end development • Deep knowledge of TypeScript, Express, RESTful API design, and asynchronous patterns (Promises, async/await, streams) • Strong SQL skills and hands-on experience tuning MySQL or PostgreSQL for high concurrency • Production experience with Docker (build, compose, multi-stage images) and CI/CD pipelines (GitLab CI, GitHub Actions, or similar) • Proficiency with Git workflows and code review culture • Experience implementing caching strategies (e.g., Redis) • Passion for automated testing, clean architecture, and scalable design • Understanding of OAuth 2.0, JWT, and secure coding practices Nice-to-Have • Experience with TypeORM, NestJS or Fastify • Experience exposing or consuming GraphQL

Posted 1 month ago

Apply

7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About NationsBenefits: At NationsBenefits, we are leading the transformation of the insurance industry by developing innovative benefits management solutions. We focus on modernizing complex back-office systems to create scalable, secure, and high-performing platforms that streamline operations for our clients. As part of our strategic growth, we are focused on platform modernization — transitioning legacy systems to modern, cloud-native architectures that support the scalability, reliability, and high performance of core back-office functions in the insurance domain. We are seeking a highly skilled and experienced Sr. SQL Developer with 5–7 years of hands-on expertise in performance tuning, optimization of complex SQL queries and stored procedures, and advanced database architecture. The ideal candidate will play a key role in enhancing the performance, scalability, and reliability of our data systems by leveraging advanced database optimization techniques and collaborating with cross-functional teams. Key Responsibilities: Design, develop, and optimize complex SQL queries and stored procedures for high performance and scalability. Conduct performance tuning and troubleshoot slow-running queries using profiling and monitoring tools. Analyze and improve database architecture, including indexing, partitioning, and sharding strategies. Establish and enforce best practices for SQL development, data retrieval, and database design. Work closely with data architects, backend engineers, and DevOps to ensure optimal database reliability and performance. Monitor and evaluate database performance metrics, proactively identifying and addressing potential issues. Create and maintain technical documentation for optimized queries, stored procedures, and overall database design. Participate in code reviews and performance assessments to ensure SQL code quality and maintainability. Required Skills & Qualifications: 5–7 years of strong experience in SQL development and query optimization. Expertise in tuning complex queries and stored procedures for large-scale datasets. Hands-on experience with tools like SQL Profiler, Extended Events, Query Store, or similar performance monitoring utilities. Deep knowledge of indexing strategies, data partitioning, and sharding techniques. Strong understanding of execution plans, index types (clustered/non-clustered), and join operations. Proficient in relational data modeling and normalization principles. Experience with RDBMS platforms such as Microsoft SQL Server, PostgreSQL, MySQL, or Oracle. Capability to analyze database metrics and build custom performance dashboards. Familiarity with a programming language such as C# or .NET is a plus. Exposure to Python or Shell scripting for automation and data processing is preferred. Experience working with high-throughput transactional systems or large-scale data warehouses is advantageous. Excellent problem-solving and analytical skills. Strong communication and documentation abilities. Ability to work independently and collaboratively across cross-functional teams. Bachelor's degree or higher in Computer Science, Information Technology, or a related field.
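A common starting point for the tuning work above is SQL Server's sys.dm_exec_query_stats DMV. Here is a hedged Python sketch that pulls the most CPU-expensive statements; the connection string is a placeholder, and it assumes pyodbc plus VIEW SERVER STATE permission.

```python
# Sketch: list the top CPU-consuming statements from SQL Server's plan cache.
# Connection details are hypothetical; requires the pyodbc package.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};SERVER=db-host;"
    "DATABASE=master;Trusted_Connection=yes;"
)
sql = """
SELECT TOP 10
    qs.total_worker_time / qs.execution_count AS avg_cpu_us,
    qs.execution_count,
    SUBSTRING(st.text, 1, 200) AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY avg_cpu_us DESC;
"""
for row in conn.cursor().execute(sql):
    print(f"{row.avg_cpu_us:>12} us x {row.execution_count}: {row.query_text}")
```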

Posted 1 month ago

Apply

4.0 years

3 - 6 Lacs

Hyderābād

On-site

Join one of the nation’s leading and most impactful health care performance improvement companies. Over the years, Health Catalyst has achieved and documented clinical, operational, and financial improvements for many of the nation’s leading healthcare organizations. We are also increasingly serving international markets. Our mission is to be the catalyst for massive, measurable, data-informed healthcare improvement through: Data: integrate data in a flexible, open & scalable platform to power healthcare’s digital transformation Analytics: deliver analytic applications & services that generate insight on how to measurably improve Expertise: provide clinical, financial & operational experts who enable & accelerate improvement Engagement: attract, develop and retain world-class team members by being a best place to work POSITION OVERVIEW: We are looking for a highly skilled Senior Database Engineer with 4+ years of hands-on experience in managing and optimizing large-scale, high-throughput database systems. The ideal candidate will possess deep expertise in handling complex ingestion pipelines across multiple data stores and a strong understanding of distributed database architecture. The candidate will play a critical technical leadership role in ensuring our data systems are robust, performant, and scalable to support massive datasets ingested from various sources without bottlenecks. You will work closely with data engineers, platform engineers, and infrastructure teams to continuously improve database performance and reliability. KEY RESPONSIBILITIES: Query Optimization: Design, write, debug and optimize complex queries for RDS (MySQL/PostgreSQL), MongoDB, Elasticsearch, and Cassandra. Large-Scale Ingestion: Configure databases to handle high-throughput data ingestion efficiently. Database Tuning: Optimize database configurations (e.g., memory allocation, connection pooling, indexing) to support large-scale operations. Schema and Index Design: Develop schemas and indexes to ensure efficient storage and retrieval of large datasets. Monitoring and Troubleshooting: Analyze and resolve issues such as slow ingestion rates, replication delays, and performance bottlenecks. Performance Debugging: Analyze and troubleshoot database slowdowns by investigating query execution plans, logs, and metrics. Log Analysis: Use database logs to diagnose and resolve issues related to query performance, replication, and ingestion bottlenecks. Data Partitioning and Sharding: Implement partitioning, sharding, and other distributed database techniques to improve scalability. Batch and Real-Time Processing: Optimize ingestion pipelines for both batch and real-time workloads. Collaboration: Partner with data engineers and Kafka experts to design and maintain robust ingestion pipelines. Stay Updated: Stay up to date with the latest advancements in database technologies and recommend improvements. REQUIRED SKILLS AND QUALIFICATIONS: Database Expertise: Proven experience with MySQL/PostgreSQL (RDS), MongoDB, Elasticsearch, and Cassandra. High-Volume Operations: Proven experience in configuring and managing databases for large-scale data ingestion. Performance Tuning: Hands-on experience with query optimization, indexing strategies, and execution plan analysis for large datasets. Database Internals: Strong understanding of replication, partitioning, sharding, and caching mechanisms. Data Modeling: Ability to design schemas and data models tailored for high-throughput use cases.
Programming Skills: Proficiency in at least one programming language (e.g., Python, Java, Go) for building data pipelines. Debugging Proficiency: Strong ability to debug slowdowns by analyzing database logs, query execution plans, and system metrics. Log Analysis Tools: Familiarity with database log formats and tools for parsing and analyzing logs. Monitoring Tools: Experience with monitoring tools such as AWS CloudWatch, Prometheus, and Grafana to track ingestion performance. Problem-Solving: Analytical skills to diagnose and resolve ingestion-related issues effectively. PREFERRED QUALIFICATIONS: Certification in any of the mentioned database technologies. Hands-on experience with cloud platforms such as AWS (preferred), Azure, or GCP. Knowledge of distributed systems and large-scale data processing. Familiarity with cloud-based database solutions and infrastructure. Familiarity with large scale data ingestion tools like Kafka, Spark or Flink. EDUCATIONAL REQUIREMENTS: Bachelor’s degree in computer science, Information Technology, or a related field. Equivalent work experience will also be considered The above statements describe the general nature and level of work being performed in this job function. They are not intended to be an exhaustive list of all duties, and indeed additional responsibilities may be assigned by Health Catalyst . Studies show that candidates from underrepresented groups are less likely to apply for roles if they don’t have 100% of the qualifications shown in the job posting. While each of our roles have core requirements, please thoughtfully consider your skills and experience and decide if you are interested in the position. If you feel you may be a good fit for the role, even if you don’t meet all of the qualifications, we hope you will apply. If you feel you are lacking the core requirements for this position, we encourage you to continue exploring our careers page for other roles for which you may be a better fit. At Health Catalyst, we appreciate the opportunity to benefit from the diverse backgrounds and experiences of others. Because of our deep commitment to respect every individual, Health Catalyst is an equal opportunity employer.
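One everyday lever for the high-throughput ingestion this role owns is batching inserts to cut round trips. Here is a hedged sketch with psycopg2's execute_values; the table, columns, and sample rows are hypothetical.

```python
# Sketch: batched ingestion into PostgreSQL, collapsing many rows into few
# INSERT statements. Requires psycopg2; connection details are placeholders.
import psycopg2
from psycopg2.extras import execute_values

conn = psycopg2.connect("dbname=metrics user=ingest host=localhost")
rows = [(1, "cpu", 0.93), (1, "mem", 0.71), (2, "cpu", 0.42)]  # sample batch

with conn, conn.cursor() as cur:  # the connection context commits on success
    execute_values(
        cur,
        "INSERT INTO readings (host_id, metric, value) VALUES %s",
        rows,
        page_size=1000,  # rows per statement; tune against memory and latency
    )
```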

Posted 1 month ago

Apply

6.0 years

18 - 30 Lacs

India

On-site

Role: Senior Database Administrator (DevOps) Experience: 7+ Type: Contract Job Summary We are seeking a highly skilled and experienced Database Administrator with a minimum of 6 years of hands-on experience managing complex, high-performance, and secure database environments. This role is pivotal in maintaining and optimizing our multi-platform database infrastructure, which includes PostgreSQL, MariaDB/MySQL, MongoDB, MS SQL Server, and AWS RDS/Aurora instances. You will be working primarily within Linux-based production systems (e.g., RHEL 9.x) and will play a vital role in collaborating with DevOps, Infrastructure, and Data Engineering teams to ensure seamless database performance across environments. The ideal candidate has strong experience with infrastructure automation tools like Terraform and Ansible, is proficient with Docker, and is well-versed in cloud environments, particularly AWS. This is a critical role where your efforts will directly impact system stability, scalability, and security across all environments. Key Responsibilities Design, deploy, monitor, and manage databases across production and staging environments. Ensure high availability, performance, and data integrity for mission-critical systems. Automate database provisioning, configuration, and maintenance using Terraform and Ansible. Administer Linux-based systems for database operations with an emphasis on system reliability and uptime. Establish and maintain monitoring systems, set up proactive alerts, and rapidly respond to performance issues or incidents. Work closely with DevOps and Data Engineering teams to integrate infrastructure with MLOps and CI/CD pipelines. Implement and enforce database security best practices, including data encryption, user access control, and auditing. Conduct root cause analysis and tuning to continuously improve database performance and reduce downtime. Required Technical Skills Database Expertise: PostgreSQL: Advanced skills in replication, tuning, backup/recovery, partitioning, and logical/physical architecture. MariaDB/MySQL: Proven experience in high availability configurations, schema optimization, and performance tuning. MongoDB: Strong understanding of NoSQL structures, including indexing strategies, replica sets, and sharding. MS SQL Server: Capable of managing and maintaining enterprise-grade MS SQL Server environments. AWS RDS & Aurora: Deep familiarity with provisioning, monitoring, auto-scaling, snapshot management, and failover handling. Infrastructure & DevOps 6+ years of experience as a Database Administrator or DevOps Engineer in Linux-based environments. Hands-on expertise with Terraform, Ansible, and Infrastructure as Code (IaC) best practices. Knowledge of networking principles, firewalls, VPCs, and security hardening. Experience with monitoring tools such as Datadog, Splunk, SignalFx, and PagerDuty for observability and alerting. Strong working experience with AWS Cloud Services (EC2, VPC, IAM, CloudWatch, S3, etc.). Exposure to other cloud providers like GCP, Azure, or IBM Cloud is a plus. Familiarity with Docker, container orchestration, and integrating databases into containerized environments. Preferred Qualifications Excellent analytical and troubleshooting skills. Strong verbal and written communication skills. Ability to collaborate in cross-functional teams and drive initiatives independently. A passion for automation, observability, and scalability in production-grade environments.
Must Have: AWS, Ansible, DevOps, Terraform Skills: postgresql,mariadb,datadog,containerization,networking,linux,mongodb,devops,terraform,aws aurora,cloud services,amazon web services (aws),ms sql server,ansible,aws,mysql,aws rds,docker,infrastructure,database
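As a taste of the AWS-side automation this role involves, here is a hedged boto3 sketch that audits RDS instances for a recent automated snapshot; the one-day threshold is arbitrary, and credentials are assumed to come from the environment.

```python
# Sketch: backup audit for AWS RDS. Lists instances and checks each for an
# automated snapshot newer than the cutoff. Illustrative thresholds only.
from datetime import datetime, timedelta, timezone

import boto3

rds = boto3.client("rds")
cutoff = datetime.now(timezone.utc) - timedelta(days=1)

for db in rds.describe_db_instances()["DBInstances"]:
    name = db["DBInstanceIdentifier"]
    snaps = rds.describe_db_snapshots(
        DBInstanceIdentifier=name, SnapshotType="automated"
    )["DBSnapshots"]
    # Snapshots still being created may lack SnapshotCreateTime; treat as stale.
    recent = [s for s in snaps if s.get("SnapshotCreateTime", cutoff) > cutoff]
    status = "OK" if recent else "MISSING RECENT SNAPSHOT"
    print(f"{name}: {status}")
```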

Posted 1 month ago

Apply

5.0 - 9.0 years

0 Lacs

pune, maharashtra

On-site

You will be part of a global IT services organization, Stefanini, which has been established for 37 years with a presence in 41 countries and 88 offices worldwide. With 38,000 employees and growing, Stefanini offers various services including AI, SAP center of excellence, enterprise services, managed services, infra, cloud, support functions, staffing, and more. In India, Stefanini operates in Noida, Pune, and Hyderabad, with a workforce of 1,800 employees over the past 10 years. As a MongoDB administrator on a contract basis for 6 months, you will be responsible for managing, maintaining, and troubleshooting databases in both MongoDB and Cassandra. Your key responsibilities will include: - Installing, configuring, and provisioning MongoDB and Cassandra clusters. - Implementing sharding and replication for database systems. - Fine-tuning configurations of MongoDB and Cassandra servers for optimal performance. - Ensuring best practice processes are followed and service levels are met for database systems. - Performance tuning of both MongoDB and Cassandra systems. - Diagnosing and troubleshooting database errors. - Proficiency in the aggregation framework, indexing, replication, sharding, and server & application administration. - Familiarity with tools like mongodump, mongorestore, filesystem snapshots, and MongoDB Cloud Manager or equivalent. - Developing Disaster Recovery (DR) and Continuity of Business (COB) plans. - Monitoring disk usage, CPU, memory, and read/write latency issues, with alerting mechanisms. - Monitoring at the server, database, and collection level, utilizing various monitoring tools related to MongoDB and Cassandra. - Experience in rolling database upgrades. Join our global brand with a work-friendly culture that emphasizes a healthy work-life balance.
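For the server-level monitoring duties above, MongoDB's serverStatus command exposes the connection, memory, and opcounter figures a DBA typically watches. A minimal sketch, assuming PyMongo and an arbitrary 80% connection threshold:

```python
# Sketch: read connection, memory, and opcounter metrics via serverStatus.
# Host and alert threshold are placeholders.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
status = client.admin.command("serverStatus")

conns = status["connections"]
mem = status["mem"]
ops = status["opcounters"]

print(f'connections: {conns["current"]} current / {conns["available"]} available')
print(f'resident memory: {mem["resident"]} MB')
print(f'opcounters: {ops["insert"]} inserts, {ops["query"]} queries')

# Alert when current connections approach the configured limit.
if conns["current"] > 0.8 * (conns["current"] + conns["available"]):
    print("WARNING: connection count nearing its limit")
```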

Posted 1 month ago

Apply

11.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Responsibilities Define and drive the overall architecture for scalable, secure, and high-performance distributed systems. Write and review code for critical modules and performance-sensitive components to set quality and architectural standards. Collaborate with engineering leads and product managers to align technology strategy with business goals. Evaluate and recommend tools, technologies, and processes to ensure the highest quality product platform. Own and evolve the system design, ensuring modularity, multi-tenancy, and future extensibility. Establish and govern best practices around service design, API development, security, observability, and performance. Review code, designs, and technical documentation, ensuring adherence to architecture and design principles. Lead design discussions and mentor senior and mid-level engineers to improve design thinking and engineering quality. Partner with DevOps to optimise CI/CD, containerization, and infrastructure-as-code. Stay abreast of industry trends and emerging technologies, assessing their relevance and value. Requirements Strong understanding of data structures and algorithms, and a minimum of 11 years of experience. Good knowledge of low-level and high-level system designs and best practices. Strong expertise in Java & Spring Boot, with a deep understanding of microservice architectures and design patterns. Good knowledge of databases (both SQL and NoSQL), including schema design, sharding, and performance tuning. Expertise in Kubernetes, Helm, and container orchestration for deploying and managing scalable applications. Advanced knowledge of Kafka for stream processing, event-driven architecture, and data integration. Proficiency in Redis for caching, session management, and pub-sub use cases. Solid understanding of API design (REST/gRPC), authentication (OAuth2/JWT), and security best practices. Strong grasp of system design fundamentals: scalability, reliability, consistency, and observability. Experience with monitoring and logging frameworks (e.g., Datadog, Prometheus, Grafana, ELK, or equivalent). Excellent problem-solving, communication, and cross-functional leadership skills. Prior experience in leading architecture for SaaS or high-scale multi-tenant platforms is highly desirable. This job was posted by Shivansh Prakash Srivastava Talent Acq from GreyOrange.
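The sharding the requirements mention ultimately rests on stable key-to-shard routing. The posting's stack is Java/Spring Boot, but the routing math is language-agnostic; this standalone Python sketch only illustrates the idea, with hypothetical shard names, and real systems would use the database's native sharding or consistent hashing to ease resharding.

```python
# Illustrative hash-based shard routing: the same key always lands on the
# same shard. Names are placeholders; not a production design.
import hashlib

SHARDS = ["shard-0", "shard-1", "shard-2", "shard-3"]

def shard_for(key: str) -> str:
    # A stable hash (md5 here) is required; Python's built-in hash() is
    # randomized per process and would route inconsistently across restarts.
    digest = hashlib.md5(key.encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

print(shard_for("tenant-42"))    # deterministic for a given tenant
print(shard_for("tenant-1337"))
```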

Posted 1 month ago

Apply

15.0 years

0 Lacs

Hyderabad, Telangana, India

Remote

Job Description The Role : Director, Software Engineering Locations : Gurgaon, Hyderabad & Bangalore The Team: We are building an end-to-end client lifecycle management solution, where technology drives procedural automation and standardization. Our solution includes industry-leading modules and tools widely adopted by financial institutions. This strategic initiative spans the entire client lifecycle—from onboarding to maintenance and offboarding—while providing seamless integration with various in-house products. We leverage a mature technology stack supported by cloud infrastructure, along with the latest advancements in the industry to deliver this solution over a multi-year span. Responsibilities and Impact: Lead a global engineering team across backend, front-end, data, and AI functions, with a focus on modern architectures, AI-driven automation, and cross-jurisdictional data compliance. Design and architect solutions for complex business challenges in the client lifecycle management space, utilizing your extensive experience with a modern technology stack and cloud infrastructure. Provide guidance and technical leadership to development teams on best practices, coding standards, and software design principles, ensuring high-quality outcomes. Demonstrate a deep understanding of existing system architecture (spanning multiple systems) and creatively envision optimal implementations to meet diverse client requirements. Drive participation in all scrum ceremonies, ensuring Agile best practices are effectively followed. Play a key role in the development team to create high-quality, high-performance, and scalable code. Evaluate and recommend new technologies, assisting in their adoption by development teams to enhance productivity and scalability. Collaborate effectively with remote teams in a geographically distributed development model. Communicate clearly and effectively with business stakeholders, building consensus and resolving queries regarding architecture and design. Troubleshoot and resolve complex software issues and defects within the technology stack and cloud-based infrastructure. Foster a professional culture within the team, emphasizing ownership, excellence, quality, and value for customers and the business. Build systems for regulatory checkpoints such as KYC, AML, FATCA/CRS, and LEI. Implement automation across entity matching, data validation, and workflow orchestration using AI and machine learning technologies. Implement agentic AI and advanced language model-based services to streamline onboarding, document processing, and exception handling. Ensure compliance with data privacy, data sovereignty, and regulatory architecture patterns (e.g., regional sharding, zero-data copy patterns). What We’re Looking For: Basic Required Qualifications: 15+ years of experience in the software development lifecycle (SDLC). Strong core Java design skills, including design patterns. Significant experience in designing and executing microservices using modern frameworks and components. Proficient in messaging tools and real-time data pipeline technologies. Expertise in optimizing SQL queries on relational databases. Strong experience with multithreading, data structures, and concurrency scenarios. Proficient in using REST APIs and data formats in creating layered systems. Experience with cloud services and serverless architectures. Familiarity with advanced AI technologies and APIs. 
Domain knowledge in client onboarding, KYC, and regulatory workflows, with a deep understanding of the client onboarding lifecycle: initiation, due diligence, approvals, legal entity structuring, and regulatory documentation. Hands-on experience with entity resolution and matching frameworks. Proven experience leading a development team on client lifecycle management products. Familiarity with business process management tools related to customization of modelers and engines. Knowledge of data partitioning, regulatory compliance, and the latest UI trends is desirable. Experience with low-code or no-code platforms is a plus. Additional Preferred Qualifications: Bachelor’s degree in computer science or a related field. Proven experience working with or on client lifecycle management and/or KYC workflow solutions, demonstrating a strong grasp of the subject matter. Extensive experience in a team environment following Agile software development principles. Strong interpersonal and written communication skills. Demonstrated ability to successfully manage multiple tasks simultaneously. High energy and a self-starter mentality, with a passion for creative problem-solving.

Posted 1 month ago

Apply

5.0 years

5 - 10 Lacs

Gurgaon

On-site

Manager EXL/M/1435552 ServicesGurgaon Posted On 28 Jul 2025 End Date 11 Sep 2025 Required Experience 5 - 10 Years Basic Section Number Of Positions 1 Band C1 Band Name Manager Cost Code D013514 Campus/Non Campus NON CAMPUS Employment Type Permanent Requisition Type New Max CTC 1500000.0000 - 2500000.0000 Complexity Level Not Applicable Work Type Hybrid – Working Partly From Home And Partly From Office Organisational Group Analytics Sub Group Analytics - UK & Europe Organization Services LOB Analytics - UK & Europe SBU Analytics Country India City Gurgaon Center EXL - Gurgaon Center 38 Skills Skill JAVA HTML Minimum Qualification B.COM Certification No data available Job Description Job Description: Senior Full Stack Developer Position: Senior Full Stack Developer Location: Gurugram Relevant Experience Required: 8+ years Employment Type: Full-time About the Role We are looking for a Senior Full Stack Developer who can build end-to-end web applications with strong expertise in both front-end and back-end development. The role involves working with Django, Node.js, React, and modern database systems (SQL, NoSQL, and Vector Databases), while leveraging real-time data streaming, AI-powered integrations, and cloud-native deployments. The ideal candidate is a hands-on technologist with a passion for modern UI/UX, scalability, and performance optimization. Key Responsibilities Front-End Development Build responsive and user-friendly interfaces using HTML5, CSS3, JavaScript, and React. Implement modern UI frameworks such as Next.js, Tailwind CSS, Bootstrap, or Material-UI. Create interactive charts and dashboards with D3.js, Recharts, Highcharts, or Plotly. Ensure cross-browser compatibility and optimize for performance and accessibility. Collaborate with designers to translate wireframes and prototypes into functional components. Back-End Development Develop RESTful & GraphQL APIs with Django/DRF and Node.js/Express. Design and implement microservices & event-driven architectures. Optimize server performance and ensure secure API integrations. Database & Data Management Work with structured (PostgreSQL, MySQL) and unstructured databases (MongoDB, Cassandra, DynamoDB). Integrate and manage Vector Databases (Pinecone, Milvus, Weaviate, Chroma) for AI-powered search and recommendations. Implement sharding, clustering, caching, and replication strategies for scalability. Manage both transactional and analytical workloads efficiently. Real-Time Processing & Visualization Implement real-time data streaming with Apache Kafka, Pulsar, or Redis Streams. Build live features (e.g., notifications, chat, analytics) using WebSockets & Server-Sent Events (SSE). Visualize large-scale data in real time for dashboards and BI applications. DevOps & Deployment Deploy applications on cloud platforms (AWS, Azure, GCP). Use Docker, Kubernetes, Helm, and Terraform for scalable deployments. Maintain CI/CD pipelines with GitHub Actions, Jenkins, or GitLab CI. Monitor, log, and ensure high availability with Prometheus, Grafana, ELK/EFK stack. Good to have AI & Advanced Capabilities Integrate state-of-the-art AI/ML models for personalization, recommendations, and semantic search. Implement Retrieval-Augmented Generation (RAG) pipelines with embeddings. Work on multimodal data processing (text, image, and video). 
Preferred Skills & Qualifications Core Stack Front-End: HTML5, CSS3, JavaScript, TypeScript, React, Next.js, Tailwind CSS/Bootstrap/Material-UI Back-End: Python (Django/DRF), Node.js/Express Databases: PostgreSQL, MySQL, MongoDB, Cassandra, DynamoDB, Vector Databases (Pinecone, Milvus, Weaviate, Chroma) APIs: REST, GraphQL, gRPC State-of-the-Art & Advanced Tools Streaming: Apache Kafka, Apache Pulsar, Redis Streams Visualization: D3.js, Highcharts, Plotly, Deck.gl Deployment: Docker, Kubernetes, Helm, Terraform, ArgoCD Cloud: AWS Lambda, Azure Functions, Google Cloud Run Monitoring: Prometheus, Grafana, OpenTelemetry Workflow Workflow Type Back Office
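For the live-notification features the role describes, Server-Sent Events are the lighter-weight alternative to WebSockets. A minimal Flask sketch follows; the counter stands in for a real event source such as a Kafka or Redis Streams consumer, and all names are hypothetical.

```python
# Sketch: Server-Sent Events endpoint with Flask. A browser EventSource
# pointed at /events receives one message per second. Illustrative only.
import time

from flask import Flask, Response

app = Flask(__name__)

@app.route("/events")
def events():
    def stream():
        n = 0
        while True:
            n += 1
            # SSE wire format: "data: <payload>" followed by a blank line.
            yield f"data: notification {n}\n\n"
            time.sleep(1)
    return Response(stream(), mimetype="text/event-stream")

if __name__ == "__main__":
    app.run(port=5000, threaded=True)
```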

Posted 1 month ago

Apply
Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
