2.0 - 7.0 years
10 - 15 Lacs
Hyderabad
Work from Office
About ValGenesis ValGenesis is a leading digital validation platform provider for life sciences companies. The ValGenesis suite of products is used by 30 of the top 50 global pharmaceutical and biotech companies to achieve digital transformation, total compliance, and manufacturing excellence/intelligence across their product lifecycle. Learn more about working for ValGenesis, the de facto standard for paperless validation in life sciences: https://www.youtube.com/watch?v=tASq7Ld0JsQ About the Role: We are seeking a highly skilled AI/ML Engineer to join our dynamic team and build the next generation of applications for our global customers. If you are a technology enthusiast and highly passionate, we are eager to discuss the potential role with you. Responsibilities Implement and deploy machine learning solutions to solve complex problems and deliver real business value, i.e., revenue, engagement, and customer satisfaction. Collaborate with data product managers, software engineers, and SMEs to identify AI/ML opportunities for improving process efficiency. Develop production-grade ML models to enhance customer experience, content recommendation, content generation, and predictive analysis. Monitor and improve model performance via data enhancement, feature engineering, experimentation, and online/offline evaluation. Stay up to date with the latest in machine learning and artificial intelligence, and influence AI/ML for the life sciences industry. Requirements 2 - 4 years of experience in AI/ML engineering, with a track record of handling increasingly complex projects. Strong programming skills in Python and Rust. Experience with Pandas, NumPy, SciPy, and OpenCV (for image processing). Experience with ML frameworks such as scikit-learn, TensorFlow, and PyTorch. Experience with GenAI tools such as LangChain, LlamaIndex, and open-source vector DBs. Experience with one or more graph DBs (Neo4j, ArangoDB). Experience with MLOps platforms such as Kubeflow or MLflow. Expertise in one or more of the following AI/ML domains: Causal AI, Reinforcement Learning, Generative AI, NLP, Dimension Reduction, Computer Vision, Sequential Models. Expertise in building, deploying, measuring, and maintaining machine learning models to address real-world problems. Thorough understanding of the software product development lifecycle, DevOps (build, continuous integration, deployment tools), and best practices. Excellent written and verbal communication and interpersonal skills. Advanced degree in Computer Science, Machine Learning, or a related field. We’re on a Mission In 2005, we disrupted the life sciences industry by introducing the world’s first digital validation lifecycle management system. ValGenesis VLMS® revolutionized compliance-based corporate validation activities and has remained the industry standard. Today, we continue to push the boundaries of innovation, enhancing and expanding our portfolio beyond validation with an end-to-end digital transformation platform. We combine our purpose-built systems with world-class consulting services to help every facet of GxP meet evolving regulations and quality expectations. The Team You’ll Join Our customers’ success is our success. We keep the customer experience centered in our decisions, from product to marketing to sales to services to support. Life sciences companies exist to improve humanity’s quality of life, and we honor that mission.
We work together. We communicate openly, support each other without reservation, and never hesitate to wear multiple hats to get the job done. We think big. Innovation is the heart of ValGenesis. That spirit drives product development as well as personal growth. We never stop aiming upward. We’re in it to win it. We’re on a path to becoming the number one intelligent validation platform in the market, and we won’t settle for anything less than being a market leader. How We Work Our Chennai, Hyderabad and Bangalore offices are onsite, 5 days per week. We believe that in-person interaction and collaboration foster creativity and a sense of community, and are critical to our future success as a company. ValGenesis is an equal-opportunity employer that makes employment decisions on the basis of merit. Our goal is to have the best-qualified people in every job. All qualified applicants will receive consideration for employment without regard to race, religion, sex, sexual orientation, gender identity, national origin, disability, or any other characteristics protected by local law.
Posted 1 month ago
5.0 - 10.0 years
3 - 6 Lacs
Bengaluru
Work from Office
Overview Core Responsibilities: 1. Orchestration and Automation: Automate service activation and management across different network domains, vendors, and layers. 2. Troubleshooting and Problem Solving: Diagnose and resolve network and service issues, requiring strong analytical and problem-solving skills. 3. Monitoring and Visualization: Utilize tools to monitor network performance and correlate service issues with network events. 4. Data Collection and Analysis: Gather and analyze data from network devices and systems to identify trends and root causes. 5. Tool Development: Potentially develop or enhance tools for monitoring, data collection, and automation. 6. Communication and Collaboration: Communicate technical information clearly to both technical and non-technical audiences, and collaborate with global teams. Required Skills and Experience: Technical Skills: 1. Strong Linux skills and scripting experience (e.g., Python, Shell scripting). 2. Experience with network troubleshooting and network management solutions (OSS/BSS). 3. Familiarity with network protocols (e.g., SNMP, Syslog, ICMP, SSH). 4. Experience with databases (e.g., PostgreSQL, Neo4j, MySQL). 5. Knowledge of Blue Planet products (BPI, BPO, ROA). Soft Skills: 1. Strong analytical and problem-solving skills. 2. Excellent communication (written and verbal) skills. 3. Ability to work independently and as part of a team. 4. Ability to work with a globally distributed team. Specific Knowledge: 1. Understanding of network architecture and technologies. 2. Knowledge of service orchestration principles. 3. Experience with Blue Planet MDSO or similar orchestration platforms. 4. Familiarity with network automation tools and techniques. Educational: 1. Bachelor's degree in Computer Science, Information Technology, or a related field. 2. Relevant certifications (e.g., AWS Certified Solutions Architect, CCNA, Python certifications) are a plus.
Posted 1 month ago
3.0 - 8.0 years
6 - 10 Lacs
Chennai
Work from Office
Overview Java development with hands-on experience in Spring Boot. Strong knowledge of UI frameworks, particularly Angular, for developing dynamic, interactive web applications. Experience with Kubernetes for managing microservices-based applications in a cloud environment. Familiarity with Postgres (relational) and Neo4j (graph database) for managing complex data models. Experience in metadata modeling and designing data structures that support high performance and scalability. Expertise in Camunda BPMN and business process automation. Experience implementing rules with the Drools Rules Engine. Knowledge of Unix/Linux systems for application deployment and management. Experience building data ingestion frameworks to process and handle large datasets.
Posted 1 month ago
3.0 - 6.0 years
0 - 0 Lacs
Chennai
Work from Office
AI Engineer: We are seeking a specialized AI Engineer to build the core intelligence of InzightEd. This role requires specific expertise in integrating large language models via APIs and structuring data within relational and graph databases. Benefits: performance bonus, overtime allowance, work from home.
Posted 1 month ago
10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Summary Position Summary DT-US Product Engineering - Data Scientist Manager We are seeking an exceptional Data Scientist who combines deep expertise in AI/ML with a strong focus on data quality and advanced analytics. This role requires a proven track record in developing production-grade machine learning solutions, implementing robust data quality frameworks, and leveraging cutting-edge analytical tools to drive business transformation through data-driven insights . Work you will do The Data Scientist will be responsible for developing and implementing end-to-end AI/ML solutions while ensuring data quality excellence across all stages of the data lifecycle. This role requires extensive experience in modern data science platforms, AI frameworks, and analytical tools, with a focus on scalable and production-ready implementations. Project Leadership and Management: Lead complex data science initiatives utilizing Databricks, Dataiku, and modern AI/ML frameworks for end-to-end solution development Establish and maintain data quality frameworks and metrics across all stages of model development Design and implement data validation pipelines and quality control mechanisms for both structured and unstructured data Strategic Development: Develop and deploy advanced machine learning models, including deep learning and generative AI solutions Design and implement automated data quality monitoring systems and anomaly detection frameworks Create and maintain MLOps pipelines for model deployment, monitoring, and maintenance Team Mentoring and Development: Lead and mentor a team of data scientists and analysts, fostering a culture of technical excellence and continuous learning Develop and implement training programs to enhance team capabilities in emerging technologies and methodologies Establish performance metrics and career development pathways for team members Drive knowledge sharing initiatives and best practices across the organization Provide technical guidance and code reviews to ensure high-quality deliverables Data Quality and Governance: Establish data quality standards and best practices for data collection, preprocessing, and feature engineering Implement data validation frameworks and quality checks throughout the ML pipeline Design and maintain data documentation systems and metadata management processes Lead initiatives for data quality improvement and standardization across projects Technical Implementation: Design, develop and deploy end-to-end AI/ML solutions using modern frameworks including TensorFlow, PyTorch, scikit-learn, XGBoost for machine learning, BERT and GPT for NLP, and OpenCV for computer vision applications Architect and implement robust data processing pipelines leveraging enterprise platforms like Databricks, Apache Spark, Pandas for data transformation, Dataiku and Apache Airflow for ETL/ELT processes, and DVC for data version control Establish and maintain production-grade MLOps practices including model deployment, monitoring, A/B testing, and continuous integration/deployment pipelines Technical Expertise Requirements: Must Have: Enterprise AI/ML Platforms: Demonstrate mastery of Databricks for large-scale processing, with proven ability to architect solutions at scale Programming & Analysis: Advanced Python (NumPy, Pandas, scikit-learn), SQL, PySpark with production-level expertise Machine Learning: Deep expertise in TensorFlow or PyTorch, and scikit-learn with proven implementation experience Big Data Technologies: Advanced knowledge of Apache Spark, Databricks, 
and distributed computing architectures Cloud Platforms: Strong experience with at least one major cloud platform (AWS/Azure/GCP) and their ML services (SageMaker/Azure ML/Vertex AI) Data Processing & Analytics: Extensive experience with enterprise-grade data processing tools and ETL pipelines MLOps & Infrastructure: Proven experience in model deployment, monitoring, and maintaining production ML systems Data Quality: Experience implementing comprehensive data quality frameworks and validation systems Version Control & Collaboration: Strong proficiency with Git, JIRA, and collaborative development practices Database Systems: Expert-level knowledge of both SQL and NoSQL databases for large-scale data management Visualization Tools: Tableau, Power BI, Plotly, Seaborn Large Language Models: Experience with GPT, BERT, LLaMA, and fine-tuning methodologies Good to Have: Additional Programming: R, Julia Additional Big Data: Hadoop, Hive, Apache Kafka Multi-Cloud: Experience across AWS, Azure, and GCP platforms Advanced Analytics: Dataiku, H2O.ai Additional MLOps: MLflow, Kubeflow, DVC (Data Version Control) Data Quality & Validation: Great Expectations, Deequ, Apache Griffin Business Intelligence: SAP HANA, SAP Business Objects, SAP BW Specialized Databases: Cassandra, MongoDB, Neo4j Container Orchestration: Kubernetes, Docker Additional Collaboration Tools: Confluence, BitBucket Education: Advanced degree in a quantitative discipline (Statistics, Math, Computer Science, Engineering) or relevant experience. Qualifications: 10-13 years of experience with data mining, statistical modeling tools and underlying algorithms. 5+ years of experience with data analysis software for large-scale analysis of structured and unstructured data. Proven track record of leading and delivering large-scale machine learning projects, including production model deployment, data quality framework implementation, and experience with very large datasets to create data-driven insights through predictive and prescriptive analytic models. Extensive knowledge of supervised and unsupervised analytic modeling techniques such as linear and logistic regression, support vector machines, decision trees / random forests, Naïve-Bayesian, neural networks, association rules, text mining, and k-nearest neighbors among other clustering models. Extensive experience with deep learning frameworks, automated ML platforms, data processing tools (Databricks Delta Lake, Apache Spark), analytics platforms (Tableau, Power BI), and major cloud providers (AWS, Azure, GCP) Experience architecting and implementing enterprise-grade solutions using cloud-native ML services while ensuring cost optimization and performance efficiency Strong track record of team leadership, stakeholder management, and driving technical excellence across multiple concurrent projects Expert-level proficiency in Python, R, and SQL, with deep understanding of statistical analysis, hypothesis testing, feature engineering, model evaluation, and validation techniques in production environments Demonstrated leadership experience in implementing MLOps practices, including model monitoring, A/B testing frameworks, and maintaining production ML systems at scale. Working knowledge of supervised and unsupervised learning techniques, such as Regression/Generalized Linear Models, decision tree analysis, boosting and bagging, Principal Components Analysis, and clustering methods.
Strong oral and written communication skills, including presentation skills The Team Information Technology Services (ITS) helps power Deloitte’s success. ITS drives Deloitte, which serves many of the world’s largest, most respected organizations. We develop and deploy cutting-edge internal and go-to-market solutions that help Deloitte operate effectively and lead in the market. Our reputation is built on a tradition of delivering with excellence. The ~3,000 professionals in ITS deliver services including: Security, risk & compliance Technology support Infrastructure Applications Relationship management Strategy Deployment PMO Financials Communications Product Engineering (PxE) Product Engineering (PxE) is the internal software and applications development team responsible for delivering leading-edge technologies to Deloitte professionals. Their broad portfolio includes web and mobile productivity tools that empower our people to log expenses, enter timesheets, book travel and more, anywhere, anytime. PxE enables our client service professionals through a comprehensive suite of applications across the business lines. In addition to application delivery, PxE offers full-scale design services, a robust mobile portfolio, cutting-edge analytics, and innovative custom development. Work Location: Hyderabad Our purpose Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities. Our people and culture Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work. Professional development At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India. Benefits To Help You Thrive At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially—and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/or other criteria. Learn more about what working at Deloitte can mean for you. Recruiting tips From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters. Requisition code: 303069
Posted 1 month ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description We are seeking an extremely passionate and experienced Node.js Backend Developer to join our team and work on building and maintaining a high-performance IoT backend system. You will be responsible for developing robust and scalable APIs, implementing data synchronization services, and integrating with 3rd party SAAS solutions at the platform level. This role is critical to ensuring the performance, reliability, and availability of our backend systems. You'll work on cutting-edge technologies with a real-world impact. Responsibilities Design, develop, and maintain backend services for our IoT platform using Node.js. Develop and optimize high-performance APIs to handle large data volumes & a growing user base. Implement data synchronization services across distributed systems. Integrate 3rd party data sources and APIs into our platform. Work with both SQL and NoSQL databases. Collaborate with the frontend developers and other team members to ensure seamless integration. Troubleshoot and resolve issues related to the backend system. Ensure 99.999% uptime and performance SLAs for the production environment. Manage basic DevOps tasks such as CI/CD pipelines, Kubernetes cluster management, and application deployment processes. Write clean, efficient, and well-documented code with high test coverage. Apply logical problem-solving skills to address complex challenges. Required Qualifications B.Tech. degree or higher educational qualification. 3+ years of experience as a Node.js developer in a production environment. Proven experience building and maintaining high-performance APIs. Hands-on experience working with SQL and NoSQL databases (e.g., PostgreSQL, ClickHouse). Strong understanding of microservice architecture concepts & hands-on experience implementing it in production systems. Deep understanding of use cases & experience with Apache Kafka & Redis. Strong understanding of backend development principles and best practices. Familiarity with basic DevOps practices and CI/CD tools. Excellent coding, debugging and logical problem-solving skills. Passion for technology and building innovative solutions. Preferred Qualifications Experience developing APIs & services, with a deeper understanding of quality controls. Knowledge of IoT data and related technologies. Experience with Kubernetes cluster setup & management is a plus. Experience with graph DBs (e.g., Neo4j). Skills: JavaScript, Node.js, database management, microservices architecture, version control, agile methodologies, problem solving, software testing, APIs, backend development, SQL, NoSQL, PostgreSQL, microservices, Kafka, DevOps, debugging, IoT, production support
Posted 1 month ago
6.0 - 11.0 years
10 - 20 Lacs
Bengaluru
Work from Office
Position Overview: We are seeking an experienced and skilled Senior Database Developer to join our dynamic team. The ideal candidate will have at least 8 years of hands-on experience in database development, with a strong focus on Neo4j (graph) databases. The role involves working on cutting-edge projects, contributing to data modelling, and ensuring the scalability and efficiency of our database systems. Responsibilities: Design, develop, and maintain databases, with a primary focus on Cypher/graph databases. Modify databases according to requests and perform tests. Advanced querying, performance tuning, and optimization of database systems. Solve database usage issues and malfunctions. Analyze all databases, monitor them against all design specifications, and prepare associated test strategies. Evaluate and engineer efficient backup-recovery processes for various databases. Promote uniformity of database-related programming effort by developing methods and procedures for database programming. Remain current with the industry by researching available products and methodologies to determine the feasibility of alternative database management systems, communication protocols, middleware, and query tools. Liaise with developers to improve applications and establish best practices. Ensure the performance, security, and scalability of database systems. Develop and optimize PL/SQL queries for efficient data storage and retrieval. Implement and maintain data models, ensuring accuracy and alignment with business needs. Train, mentor, and motivate junior team members. Contribute to assessing the team's performance evaluation. Stay updated on emerging database technologies and contribute to continuous improvement initiatives. Skills Required: 6+ years of work experience as a database developer. Bachelor's or master's degree in computer science, engineering, or a related field. Proficiency in Neo4j (graph) databases is mandatory. Strong experience with PL/SQL, data modelling, and database optimization techniques. Why us? Impactful Work: Your contributions will play a pivotal role in ensuring the quality and reliability of our platform. Professional Growth: We believe in investing in our employees' growth and development. You will have access to various learning resources, books, training programs, and opportunities to enhance your technical skills and expand your knowledge. Collaborative Culture: We value teamwork and collaboration. You will work alongside talented professionals from diverse backgrounds, including developers, product managers, and business analysts, to collectively solve challenges and deliver exceptional software. Benefits: Health insurance covered for you and your family. Quarterly team outing, twice a month team lunch, and personal and professional learning development sessions. Top performers win a chance at an international trip completely sponsored by the company.
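For context, a minimal sketch of the kind of parameterized Cypher querying this role centres on, assuming the official Neo4j Python driver (5.x); the connection details and the Person/Company schema are made up purely for illustration:

from neo4j import GraphDatabase  # official Neo4j Python driver

# Assumed connection details for illustration only.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def coworkers_of(tx, name):
    # Parameterized Cypher avoids string concatenation and lets Neo4j cache the query plan.
    query = (
        "MATCH (p:Person {name: $name})-[:WORKS_AT]->(c:Company)"
        "<-[:WORKS_AT]-(other:Person) "
        "RETURN other.name AS coworker"
    )
    return [record["coworker"] for record in tx.run(query, name=name)]

with driver.session() as session:
    # execute_read is the managed read transaction in driver 5.x.
    print(session.execute_read(coworkers_of, "Alice"))
driver.close()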
Posted 1 month ago
8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Location: Chennai, Kolkata, Gurgaon, Bangalore and Pune Experience: 8-12 Years Work Mode: Hybrid Mandatory Skills: Python, PySpark, SQL, ETL, Data Pipeline, Azure Databricks, Azure Data Factory, Azure Synapse, Airflow, Architect Designing, Architect. Overview We are seeking a skilled and motivated Data Engineer with experience in Python, SQL, Azure, and cloud-based technologies to join our dynamic team. The ideal candidate will have a solid background in building and optimizing data pipelines, working with cloud platforms, and leveraging modern data engineering tools like Airflow, PySpark, and Azure Data Engineering. If you are passionate about data and looking for an opportunity to work on cutting-edge technologies, this role is for you! Primary Roles And Responsibilities Developing Modern Data Warehouse solutions using Databricks and the AWS/Azure stack. Ability to provide solutions that are forward-thinking in the data engineering and analytics space. Collaborate with DW/BI leads to understand new ETL pipeline development requirements. Triage issues to find gaps in existing pipelines and fix them. Work with the business to understand reporting-layer needs and develop data models to fulfill them. Help junior team members resolve issues and technical challenges. Drive technical discussions with client architects and team members. Orchestrate the data pipelines in a scheduler via Airflow. Skills And Qualifications Bachelor's and/or master’s degree in computer science or equivalent experience. Must have 6+ years of total IT experience and 3+ years of experience in data warehouse/ETL projects. Deep understanding of Star and Snowflake dimensional modelling. Strong knowledge of Data Management principles. Good understanding of the Databricks Data & AI platform and Databricks Delta Lake Architecture. Should have hands-on experience in SQL, Python and Spark (PySpark). Candidate must have experience in the AWS/Azure stack. Desirable to have experience with ETL for batch and streaming (Kinesis). Experience in building ETL / data warehouse transformation processes. Experience with Apache Kafka for use with streaming / event-based data. Experience with other open-source big data products, such as Hadoop (incl. Hive, Pig, Impala). Experience with open-source non-relational / NoSQL data repositories (incl. MongoDB, Cassandra, Neo4j). Experience working with structured and unstructured data, including imaging & geospatial data. Experience working in a DevOps environment with tools such as Terraform, CircleCI, Git. Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, and troubleshooting. Databricks Certified Data Engineer Associate/Professional Certification (desirable). Comfortable working in a dynamic, fast-paced, innovative environment with several ongoing concurrent projects. Should have experience working in Agile methodology. Strong verbal and written communication skills. Strong analytical and problem-solving skills with high attention to detail. Skills: data warehouse, data engineering, ETL, data, Python, SQL, data pipeline, Azure Synapse, Azure Data Factory, pipelines, skills, Azure Databricks, architect designing, PySpark, Azure, Airflow
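For orientation only, a minimal sketch of the Airflow orchestration pattern this posting describes, assuming Airflow 2.x; the DAG name and the extract/load callables are hypothetical stand-ins, not part of the listing:

from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

# Hypothetical task callables for illustration.
def extract(**context):
    print("pulling source data")

def load(**context):
    print("writing curated tables")

with DAG(
    dag_id="example_daily_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # Airflow 2.4+ uses `schedule`; older releases use `schedule_interval`
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task   # run load only after extract succeeds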
Posted 1 month ago
8.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Location: Chennai, Kolkata, Gurgaon, Bangalore and Pune Experience: 8-12 Years Work Mode: Hybrid Mandatory Skills: Python, PySpark, SQL, ETL, Data Pipeline, Azure Databricks, Azure Data Factory, Azure Synapse, Airflow, Architect Designing, Architect. Overview We are seeking a skilled and motivated Data Engineer with experience in Python, SQL, Azure, and cloud-based technologies to join our dynamic team. The ideal candidate will have a solid background in building and optimizing data pipelines, working with cloud platforms, and leveraging modern data engineering tools like Airflow, PySpark, and Azure Data Engineering. If you are passionate about data and looking for an opportunity to work on cutting-edge technologies, this role is for you! Primary Roles And Responsibilities Developing Modern Data Warehouse solutions using Databricks and the AWS/Azure stack. Ability to provide solutions that are forward-thinking in the data engineering and analytics space. Collaborate with DW/BI leads to understand new ETL pipeline development requirements. Triage issues to find gaps in existing pipelines and fix them. Work with the business to understand reporting-layer needs and develop data models to fulfill them. Help junior team members resolve issues and technical challenges. Drive technical discussions with client architects and team members. Orchestrate the data pipelines in a scheduler via Airflow. Skills And Qualifications Bachelor's and/or master’s degree in computer science or equivalent experience. Must have 6+ years of total IT experience and 3+ years of experience in data warehouse/ETL projects. Deep understanding of Star and Snowflake dimensional modelling. Strong knowledge of Data Management principles. Good understanding of the Databricks Data & AI platform and Databricks Delta Lake Architecture. Should have hands-on experience in SQL, Python and Spark (PySpark). Candidate must have experience in the AWS/Azure stack. Desirable to have experience with ETL for batch and streaming (Kinesis). Experience in building ETL / data warehouse transformation processes. Experience with Apache Kafka for use with streaming / event-based data. Experience with other open-source big data products, such as Hadoop (incl. Hive, Pig, Impala). Experience with open-source non-relational / NoSQL data repositories (incl. MongoDB, Cassandra, Neo4j). Experience working with structured and unstructured data, including imaging & geospatial data. Experience working in a DevOps environment with tools such as Terraform, CircleCI, Git. Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, and troubleshooting. Databricks Certified Data Engineer Associate/Professional Certification (desirable). Comfortable working in a dynamic, fast-paced, innovative environment with several ongoing concurrent projects. Should have experience working in Agile methodology. Strong verbal and written communication skills. Strong analytical and problem-solving skills with high attention to detail. Skills: data warehouse, data engineering, ETL, data, Python, SQL, data pipeline, Azure Synapse, Azure Data Factory, pipelines, skills, Azure Databricks, architect designing, PySpark, Azure, Airflow
Posted 1 month ago
8.0 years
0 Lacs
Greater Kolkata Area
On-site
Location: Chennai, Kolkata, Gurgaon, Bangalore and Pune Experience: 8-12 Years Work Mode: Hybrid Mandatory Skills: Python, PySpark, SQL, ETL, Data Pipeline, Azure Databricks, Azure Data Factory, Azure Synapse, Airflow, Architect Designing, Architect. Overview We are seeking a skilled and motivated Data Engineer with experience in Python, SQL, Azure, and cloud-based technologies to join our dynamic team. The ideal candidate will have a solid background in building and optimizing data pipelines, working with cloud platforms, and leveraging modern data engineering tools like Airflow, PySpark, and Azure Data Engineering. If you are passionate about data and looking for an opportunity to work on cutting-edge technologies, this role is for you! Primary Roles And Responsibilities Developing Modern Data Warehouse solutions using Databricks and the AWS/Azure stack. Ability to provide solutions that are forward-thinking in the data engineering and analytics space. Collaborate with DW/BI leads to understand new ETL pipeline development requirements. Triage issues to find gaps in existing pipelines and fix them. Work with the business to understand reporting-layer needs and develop data models to fulfill them. Help junior team members resolve issues and technical challenges. Drive technical discussions with client architects and team members. Orchestrate the data pipelines in a scheduler via Airflow. Skills And Qualifications Bachelor's and/or master’s degree in computer science or equivalent experience. Must have 6+ years of total IT experience and 3+ years of experience in data warehouse/ETL projects. Deep understanding of Star and Snowflake dimensional modelling. Strong knowledge of Data Management principles. Good understanding of the Databricks Data & AI platform and Databricks Delta Lake Architecture. Should have hands-on experience in SQL, Python and Spark (PySpark). Candidate must have experience in the AWS/Azure stack. Desirable to have experience with ETL for batch and streaming (Kinesis). Experience in building ETL / data warehouse transformation processes. Experience with Apache Kafka for use with streaming / event-based data. Experience with other open-source big data products, such as Hadoop (incl. Hive, Pig, Impala). Experience with open-source non-relational / NoSQL data repositories (incl. MongoDB, Cassandra, Neo4j). Experience working with structured and unstructured data, including imaging & geospatial data. Experience working in a DevOps environment with tools such as Terraform, CircleCI, Git. Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, and troubleshooting. Databricks Certified Data Engineer Associate/Professional Certification (desirable). Comfortable working in a dynamic, fast-paced, innovative environment with several ongoing concurrent projects. Should have experience working in Agile methodology. Strong verbal and written communication skills. Strong analytical and problem-solving skills with high attention to detail. Skills: data warehouse, data engineering, ETL, data, Python, SQL, data pipeline, Azure Synapse, Azure Data Factory, pipelines, skills, Azure Databricks, architect designing, PySpark, Azure, Airflow
Posted 1 month ago
8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Location: Chennai, Kolkata, Gurgaon, Bangalore and Pune Experience: 8-12 Years Work Mode: Hybrid Mandatory Skills: Python, PySpark, SQL, ETL, Data Pipeline, Azure Databricks, Azure Data Factory, Azure Synapse, Airflow, Architect Designing, Architect. Overview We are seeking a skilled and motivated Data Engineer with experience in Python, SQL, Azure, and cloud-based technologies to join our dynamic team. The ideal candidate will have a solid background in building and optimizing data pipelines, working with cloud platforms, and leveraging modern data engineering tools like Airflow, PySpark, and Azure Data Engineering. If you are passionate about data and looking for an opportunity to work on cutting-edge technologies, this role is for you! Primary Roles And Responsibilities Developing Modern Data Warehouse solutions using Databricks and the AWS/Azure stack. Ability to provide solutions that are forward-thinking in the data engineering and analytics space. Collaborate with DW/BI leads to understand new ETL pipeline development requirements. Triage issues to find gaps in existing pipelines and fix them. Work with the business to understand reporting-layer needs and develop data models to fulfill them. Help junior team members resolve issues and technical challenges. Drive technical discussions with client architects and team members. Orchestrate the data pipelines in a scheduler via Airflow. Skills And Qualifications Bachelor's and/or master’s degree in computer science or equivalent experience. Must have 6+ years of total IT experience and 3+ years of experience in data warehouse/ETL projects. Deep understanding of Star and Snowflake dimensional modelling. Strong knowledge of Data Management principles. Good understanding of the Databricks Data & AI platform and Databricks Delta Lake Architecture. Should have hands-on experience in SQL, Python and Spark (PySpark). Candidate must have experience in the AWS/Azure stack. Desirable to have experience with ETL for batch and streaming (Kinesis). Experience in building ETL / data warehouse transformation processes. Experience with Apache Kafka for use with streaming / event-based data. Experience with other open-source big data products, such as Hadoop (incl. Hive, Pig, Impala). Experience with open-source non-relational / NoSQL data repositories (incl. MongoDB, Cassandra, Neo4j). Experience working with structured and unstructured data, including imaging & geospatial data. Experience working in a DevOps environment with tools such as Terraform, CircleCI, Git. Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, and troubleshooting. Databricks Certified Data Engineer Associate/Professional Certification (desirable). Comfortable working in a dynamic, fast-paced, innovative environment with several ongoing concurrent projects. Should have experience working in Agile methodology. Strong verbal and written communication skills. Strong analytical and problem-solving skills with high attention to detail. Skills: data warehouse, data engineering, ETL, data, Python, SQL, data pipeline, Azure Synapse, Azure Data Factory, pipelines, skills, Azure Databricks, architect designing, PySpark, Azure, Airflow
Posted 1 month ago
0 years
0 Lacs
Bangalore Urban, Karnataka, India
On-site
Data Modeller JD We are seeking a skilled Data Modeller to join our Corporate Banking team. The ideal candidate will have a strong background in creating data models for various banking services, including Current Account Savings Account (CASA), Loans, and Credit Services. This role involves collaborating with the Data Architect to define data model structures within a data mesh environment and coordinating with multiple departments to ensure cohesive data management practices. Data Modelling: Design and develop data models for CASA, Loan, and Credit Services, ensuring they meet business requirements and compliance standards. Create conceptual, logical, and physical data models that support the bank's strategic objectives. Ensure data models are optimized for performance, security, and scalability to support business operations and analytics. Collaboration With Data Architect: Work closely with the Data Architect to establish the overall data architecture strategy and framework. Contribute to the definition of data model structures within a data mesh environment. Data Quality And Governance: Ensure data quality and integrity in the data models by implementing best practices in data governance. Assist in the establishment of data management policies and standards. Conduct regular data audits and reviews to ensure data accuracy and consistency across systems. Data Modelling Tools: ERwin, IBM InfoSphere Data Architect, Oracle Data Modeler, Microsoft Visio, or similar tools. Databases: SQL, Oracle, MySQL, MS SQL Server, PostgreSQL, Neo4j Graph. Data Warehousing Technologies: Snowflake, Teradata, or similar. ETL Tools: Informatica, Talend, Apache NiFi, Microsoft SSIS, or similar. Big Data Technologies: Hadoop, Spark (optional but preferred). Technologies: Experience with data modelling on cloud platforms such as Microsoft Azure (Synapse, Data Factory).
Posted 1 month ago
0 years
0 Lacs
Bangalore Urban, Karnataka, India
On-site
Full Stack Developer Key Responsibilities Develop and maintain RESTful APIs using FAST or Flask frameworks. Design, implement, and manage containerized applications using Kubernetes and Docker. Optimize and manage in-memory data structures using Redis. Work with graph databases such as Neo4j or Vector J to manage and query complex data relationships. Collaborate with cross-functional teams to define, design, and ship new features. Write clean, maintainable, and efficient code. Troubleshoot and debug applications. Participate in code reviews to maintain code quality and share knowledge. Core Skills Python: Proficient in Python programming with a strong understanding of its libraries and frameworks. REST Endpoint APIs: Experience with developing RESTful APIs using FAST or Flask. Kubernetes/Docker Containers: Hands-on experience with containerization technologies and orchestration tools. Redis: Knowledge of Redis for in-memory data storage and caching. Graph Databases: Experience with Neo4j or Vector J for managing and querying graph data.
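As a rough illustration of the REST-plus-Redis-caching combination this role calls for, here is a minimal sketch; the endpoint path, the local Redis instance, and the stubbed lookup are assumptions for the example only (run it with an ASGI server such as uvicorn):

import json
from fastapi import FastAPI
import redis

app = FastAPI()
cache = redis.Redis(host="localhost", port=6379, db=0)  # assumed local Redis instance

@app.get("/items/{item_id}")
def read_item(item_id: int):
    # Serve from the in-memory cache when possible; fall back to a stubbed lookup.
    cached = cache.get(f"item:{item_id}")
    if cached is not None:
        return json.loads(cached)
    item = {"id": item_id, "name": f"item-{item_id}"}  # placeholder for a real database query
    cache.setex(f"item:{item_id}", 300, json.dumps(item))  # cache the result for 5 minutes
    return item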
Posted 1 month ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: Data Engineer – Cloud-Agnostic, Data, Analytics & AI Product Team Location: Hyderabad Employment Type: Full-time Why this role matters Our analytics and AI products are only as good as the data they run on. You will design and operate the pipelines and micro-services that transform multi-structured data into reliable, governed, and instantly consumable assets—regardless of which cloud the customer chooses. Core Skills & Knowledge Programming: Python 3.10+, Pandas or Polars, SQL (ANSI, window functions, CTEs), basic bash. Databases & Warehouses: PostgreSQL, Snowflake (stages, tasks, streams), parquet/Delta-Lake tables on S3/ADLS/GCS. APIs & Services: FastAPI, Pydantic models, OpenAPI specs, JWT/OAuth authentication. Orchestration & Scheduling: Apache Airflow, Dagster, or Prefect; familiarity with event-driven triggers via cloud queues (SQS, Pub/Sub). Cloud Foundations: Hands-on with at least one major cloud (AWS, Azure, GCP) and willingness to write cloud-agnostic code, with a cost-aware development approach. Testing & CI/CD: pytest, GitHub Actions / Azure Pipelines; Docker-first local dev; semantic versioning. Data Governance: Basic understanding of GDPR/PII handling, role-based access, and encryption-at-rest/in-flight. Nice-to-Have / Stretch Skills Streaming ingestion with Kafka / Kinesis / Event Hub and PySpark Structured Streaming. Great Expectations, Soda, or Monte Carlo for data quality monitoring. Graph or time-series stores (Neo4j, TimescaleDB). Experience & Education 6-8 years of overall IT experience with over 4 years of relevant experience building data pipelines or back-end services in production, ideally supporting analytics or ML use-cases. Bachelor’s in Computer Science, Data Engineering, or demonstrably equivalent experience.
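For context on the Pydantic-based data contracts mentioned above, a small sketch assuming Pydantic v2 and pandas with a parquet engine (e.g. pyarrow) installed; the Booking model and its fields are hypothetical, chosen only to show validated records flowing into a typed parquet output:

import pandas as pd
from pydantic import BaseModel, ValidationError

# Hypothetical record contract for illustration; field names are assumptions.
class Booking(BaseModel):
    booking_id: str
    amount: float
    currency: str = "USD"

raw_rows = [
    {"booking_id": "B-1", "amount": "120.50"},        # numeric string coerced to float
    {"booking_id": "B-2", "amount": "not-a-number"},  # rejected by validation
]

valid, rejected = [], []
for row in raw_rows:
    try:
        valid.append(Booking(**row).model_dump())
    except ValidationError as err:
        rejected.append({"row": row, "error": str(err)})

# Write only the validated, typed records; requires a parquet engine such as pyarrow.
pd.DataFrame(valid).to_parquet("bookings.parquet", index=False)
print(f"{len(valid)} valid rows written, {len(rejected)} rejected")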
Posted 1 month ago
30.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Position Overview ABOUT APOLLO Apollo is a high-growth, global alternative asset manager. In our asset management business, we seek to provide our clients excess return at every point along the risk-reward spectrum from investment grade to private equity with a focus on three investing strategies: yield, hybrid, and equity. For more than three decades, our investing expertise across our fully integrated platform has served the financial return needs of our clients and provided businesses with innovative capital solutions for growth. Through Athene, our retirement services business, we specialize in helping clients achieve financial security by providing a suite of retirement savings products and acting as a solutions provider to institutions. Our patient, creative, and knowledgeable approach to investing aligns our clients, businesses we invest in, our employees, and the communities we impact, to expand opportunity and achieve positive outcomes. OUR PURPOSE AND CORE VALUES Our clients rely on our investment acumen to help secure their future. We must never lose our focus and determination to be the best investors and most trusted partners on their behalf. We strive to be: The leading provider of retirement income solutions to institutions, companies, and individuals. The leading provider of capital solutions to companies. Our breadth and scale enable us to deliver capital for even the largest projects – and our small firm mindset ensures we will be a thoughtful and dedicated partner to these organizations. We are committed to helping them build stronger businesses. A leading contributor to addressing some of the biggest issues facing the world today – such as energy transition, accelerating the adoption of new technologies, and social impact – where innovative approaches to investing can make a positive difference. We are building a unique firm of extraordinary colleagues who: Outperform expectations. Challenge convention. Champion opportunity. Lead responsibly. Drive collaboration. As One Apollo team, we believe that doing great work and having fun go hand in hand, and we are proud of what we can achieve together. Our Benefits Apollo relies on its people to keep it a leader in alternative investment management, and the firm’s benefit programs are crafted to offer meaningful coverage for both you and your family. Please reach out to your Human Capital Business Partner for more detailed information on specific benefits. Position Overview At Apollo, we are a global team of alternative investment managers passionate about delivering uncommon value to our investors and shareholders. With over 30 years of proven expertise across Private Equity, Credit, and Real Assets in various regions and industries, we are known for our integrated businesses, our strong investment performance, our value-oriented philosophy, and our people. We seek a Senior Engineer/Full Stack Developer to innovate, manage, direct, architect, design, and implement solutions focused on our trade operations and controller functions across Private Equity, Credit, and Real Assets. The ideal candidate is a well-rounded hands-on engineer passionate about delivering quality software on the Java stack. Our Senior Engineer will work closely with key stakeholders in our Middle Office and Controllers teams and in the Credit and Opportunistic Technology teams to successfully deliver business requirements, projects, and programs.
The candidate will have proven skills in independently managing the full software development lifecycle, working with end-users, business analysts, and project managers in defining and refining the problem statement, and delivering quality solutions on time. They will have the aptitude to quickly learn and embrace emerging technologies and proven methodologies to innovate and improve the correctness, quality, and timeliness of solutions delivered by the team. Primary Responsibilities Design elegant solutions for systems that result in simple, extensible, maintainable, high-quality code. Provide hands-on technical expertise in architecture, design, development, code reviews, quality assurance, observability, and product support. Use technical knowledge of product design, patterns, and code to identify risks and prevent software defects. Mentor and nurture other team members on doing all of the above with quality. Foster a culture of collaboration, disciplined software engineering practices, and a mindset to leave things better than you found them. Optimize team processes to improve productivity and responsiveness to feedback and changing priorities. Build strong relationships with key stakeholders, collaborate, and communicate effectively to reach successful outcomes. Passionate about delivering high-impact and breakthrough value to stakeholders. Desire to learn the domain and deliver enterprise solutions at a higher velocity. Manage deliverables from the early stages of requirement gathering through development, testing, UAT, deployment, and post-production. Lead in the planning, execution, and delivery of the team’s commitments. Qualifications & Experience: Master’s or bachelor’s degree in Computer Science or another STEM field Experience with software development in the Alternative Asset Management or Investment Banking domain 8+ years of software development experience in at least one of the following OO languages: Java, C++, or C# 5+ years of Web 2.0 UI/UX development experience in at least one of the following frameworks using JavaScript/TypeScript: ExtJS, ReactJS, AngularJS, or Vue. Hands-on development expertise in Java, Spring Boot, REST, Messaging, JPA, and SQL for the last 4+ years Hands-on development expertise in building applications using RESTful and Microservices architecture Expertise in developing applications using TDD/BDD/ATDD with hands-on experience with at least one of Junit, Spring Test, TestNG, or Cucumber A strong understanding of SOLID principles, Design Patterns, Enterprise Integration Patterns A strong understanding of relational databases, SQL, ER modeling, and ORM technologies A strong understanding of BPM and its application Hands-on experience with various CI/CD practices and tools such as Jenkins, Azure DevOps, TeamCity, etcetera Exceptional problem-solving & debugging skills. Awareness of emerging application development methodologies, design patterns, and technologies. Ability to quickly learn new and emerging technologies and adopt solutions from within the company or the open-source community.
Experience with the below will be a plus: Buy-side operational and fund accounting processes Business processes and workflows using modern BPM/Low Code/No Code platforms (JBPM, Bonitasoft, Appian, Logic Apps, Unqork, etcetera…) OpenAPI, GraphQL, gRPC, ESB, SOAP, WCF, Kafka, and Node Serverless architecture Microsoft Azure Designing and implementing microservices on AKS Azure DevOps Sencha platform NoSQL databases (MongoDB, Cosmos DB, Neo4J) Python software development Functional programming paradigm Apollo provides equal employment opportunities regardless of age, disability, gender reassignment, marital or civil partner status, pregnancy or maternity, race, color, nationality, ethnic or national origin, religion or belief, veteran status, gender/sex or sexual orientation, or any other criterion or circumstance protected by applicable law, ordinance, or regulation. The above criteria are intended to be used as a guide only – candidates who do not meet all the above criteria may still be considered if they are deemed to have relevant experience/ equivalent levels of skill or knowledge to fulfil the requirements of the role. Any job offer will be conditional upon and subject to satisfactory reference and background screening checks, all necessary corporate and regulatory approvals or certifications as required from time to time and entering into definitive contractual documentation satisfactory to Apollo.
Posted 1 month ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
YASH Technologies is a leading technology integrator specializing in helping clients reimagine operating models, enhance competitiveness, optimize costs, foster exceptional stakeholder experiences, and drive business transformation. At YASH, we’re a cluster of the brightest stars working with cutting-edge technologies. Our purpose is anchored in a single truth – bringing real positive changes in an increasingly virtual world – and it drives us beyond generational gaps and disruptions of the future. We are looking forward to hiring AI/ML professionals in the following areas: Designation: Sr. Data Scientist Experience: 4–8 Years Job Type: Full-time We are seeking a highly skilled and motivated Senior Data Scientist to join our dynamic team. In this role, you will leverage your advanced analytical and technical expertise to solve complex business problems and drive impactful data-driven decisions. You will design, develop, and deploy sophisticated machine learning models, conduct in-depth data analyses, and collaborate with cross-functional teams to deliver actionable insights. Responsibilities Design and implement RAG pipelines using LangChain and LangGraph. Integrate AWS open-source vector databases (e.g., OpenSearch with the KNN plugin). Handle complex query chaining and prompt orchestration. Work with graph-based knowledge representations (e.g., Neo4j, Stardog). Collaborate with teams to deliver scalable GenAI solutions. Required Skills Proficiency in LLMs, LangChain, and embeddings. Strong background in classification, regression, clustering, and NLP. Knowledge of AWS and DevOps (Docker, Git). Hands-on with FastAPI and model deployment workflows. At YASH, you are empowered to create a career that will take you where you want to go while working in an inclusive team environment. We leverage career-oriented skilling models and optimize our collective intelligence, aided with technology, for continuous learning, unlearning, and relearning at a rapid pace and scale. Our Hyperlearning workplace is grounded upon four principles: flexible work arrangements, free spirit, and emotional positivity; agile self-determination, trust, transparency, and open collaboration; all support needed for the realization of business goals; stable employment with a great atmosphere and ethical corporate culture.
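To make the retrieve-then-generate pattern behind the RAG pipelines above concrete, here is a deliberately framework-free sketch; the toy in-memory corpus, the character-count "embedding", and the stubbed prompt stand in for a real vector store (e.g. OpenSearch) and a hosted LLM, and are not part of the job description:

import math

# Toy corpus; real pipelines use model embeddings and a vector database.
docs = {
    "doc1": "Neo4j stores knowledge as nodes and relationships.",
    "doc2": "OpenSearch KNN indexes support vector similarity search.",
}

def embed(text):
    # Bag-of-letter-counts as a stand-in embedding, purely for illustration.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - 97] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def retrieve(question, k=1):
    # Rank documents by similarity to the question and keep the top k.
    q = embed(question)
    ranked = sorted(docs.items(), key=lambda kv: cosine(q, embed(kv[1])), reverse=True)
    return [text for _, text in ranked[:k]]

def answer(question):
    context = "\n".join(retrieve(question))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return prompt  # a real pipeline would send this prompt to an LLM

print(answer("How does vector similarity search work?"))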
Posted 1 month ago
8.0 years
0 Lacs
India
Remote
Job Title: Principal SDE Experience: 8+ Years Location: Remote Responsibilities Principal SDE At Shakudo, we’re building the world’s first operating system for data and AI—a unified platform that streamlines powerful open-source and proprietary tools into a seamless, production-ready environment. We’re looking for a Principal Software Development Engineer to lead the development of full end-to-end applications on our platform. This role is ideal for engineers who love solving real customer problems, moving across the stack, and delivering high-impact solutions that showcase what’s possible on Shakudo. What You’ll Do • Design and build complete applications—from backend to frontend—using Shakudo and open-source tools like Neo4J, ollama, Spark, and many more • Solve real-world data and AI challenges with elegant, production-ready solutions • Collaborate with Product and Customer Engineering to translate needs into scalable systems • Drive architecture and design patterns for building on Shakudo—with high autonomy and self-direction • Set the standard for building efficient, reusable, and impactful solutions What You Bring • 8+ years building production systems across the stack • Strong backend and frontend experience (e.g. Python, React, TypeScript) • Familiarity with cloud infrastructure, Kubernetes, and data/AI tooling • A hands-on, solutions-first mindset and a passion for fast, high-quality delivery Why This Role You’ll lead by example, building flagship applications that demonstrate the power of Shakudo. This role offers high ownership, high impact, and the chance to shape how modern data and AI solutions are built.
Posted 1 month ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. RCE_Risk Data Engineer/Leads Description – External Job Description: Our Technology team builds innovative digital solutions rapidly and at scale to deliver the next generation of Financial and Non-Financial services across the globe. The position is a senior technical, hands-on delivery role, requiring knowledge of data engineering, cloud infrastructure and platform engineering, platform operations, and production support using ground-breaking cloud and big data technologies. The ideal candidate, with 3-6 years of relevant experience, will possess strong technical skills, an eagerness to learn, a keen interest in the three key pillars that our team supports (i.e., Financial Crime, Financial Risk and Compliance technology transformation), the ability to work collaboratively in a fast-paced environment, and an aptitude for picking up new tools and techniques on the job, building on existing skillsets as a foundation. In this role you will: Ingest and provision raw datasets, enriched tables, and/or curated, re-usable data assets to enable a variety of use cases. Drive improvements in the reliability and frequency of data ingestion, including increasing real-time coverage. Support and enhance data ingestion infrastructure and pipelines. Design and implement data pipelines that collect data from disparate sources across the enterprise, and from external sources, and deliver it to our data platform. Build Extract, Transform and Load (ETL) workflows, using both advanced data manipulation tools and programmatically manipulating data throughout our data flows, ensuring data is available at each stage in the data flow, and in the form needed for each system, service and customer along said data flow. Identify and onboard data sources using existing schemas and, where required, conduct exploratory data analysis to investigate and provide solutions. Evaluate modern technologies, frameworks, and tools in the data engineering space to drive innovation and improve data processing capabilities. Core/Must-Have Skills: 3-8 years of expertise in designing and implementing data warehouses and data lakes using the Oracle tech stack (DB: PL/SQL). At least 4+ years of experience in database design and dimension modelling using Oracle PL/SQL. Should have experience working with advanced PL/SQL concepts (materialized views, global temporary tables, partitions, PL/SQL packages). Experience in SQL tuning, tuning of PL/SQL solutions, and physical optimization of databases. Experience in writing and tuning SQL scripts including tables, views, indexes and complex PL/SQL objects (procedures, functions, triggers and packages) in Oracle Database 11g or higher. Experience in developing ETL processes – ETL control tables, error logging, auditing, data quality, etc. Should be able to implement reusability, parameterization, workflow design, etc. Advanced working SQL knowledge and experience working with relational and NoSQL databases, as well as working familiarity with a variety of databases (Oracle, SQL Server, Neo4j). Strong analytical and critical thinking skills, with the ability to identify and resolve issues in data pipelines and systems.
Strong understanding of ETL methodologies and best practices. Ability to collaborate with cross-functional teams to ensure successful implementation of solutions. Experience with OLAP and OLTP databases, and with data structuring/modelling and an understanding of key data points.

Good to have: Experience working in the Financial Crime, Financial Risk and Compliance technology transformation domains. Certification in any cloud tech stack. Experience building and optimizing data pipelines on AWS Glue or Oracle Cloud. Design and development of systems for the maintenance of an Azure/AWS lakehouse, ETL processes, business intelligence and data ingestion pipelines for AI/ML use cases. Experience with data visualization (Power BI/Tableau) and SSRS.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
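For illustration, here is a minimal Python sketch of the ETL control-table and error-logging pattern referenced in the posting above. It assumes the python-oracledb driver; the table names (etl_control, etl_error_log, stg_transactions, txn_fact) and connection details are hypothetical placeholders, not part of the original posting.

```python
# Minimal sketch of an ETL control-table pattern with error logging
# (hypothetical table/column names; connection details are placeholders).
import oracledb

def run_load(conn):
    cur = conn.cursor()
    # 1. Register the run in a control table so it can be audited and restarted.
    run_id_var = cur.var(int)
    cur.execute(
        """INSERT INTO etl_control (job_name, status, started_at)
           VALUES (:job, 'RUNNING', SYSTIMESTAMP)
           RETURNING run_id INTO :rid""",
        job="daily_txn_load", rid=run_id_var,
    )
    run_id = run_id_var.getvalue()[0]
    conn.commit()  # persist the control row independently of the load itself
    try:
        # 2. Upsert from a staging table into the target fact table.
        cur.execute(
            """MERGE INTO txn_fact t
               USING stg_transactions s ON (t.txn_id = s.txn_id)
               WHEN MATCHED THEN UPDATE SET t.amount = s.amount
               WHEN NOT MATCHED THEN INSERT (txn_id, amount)
               VALUES (s.txn_id, s.amount)"""
        )
        rows = cur.rowcount
        # 3. Mark the run successful and record how many rows were affected.
        cur.execute(
            """UPDATE etl_control
               SET status = 'SUCCESS', rows_loaded = :n, ended_at = SYSTIMESTAMP
               WHERE run_id = :rid""",
            n=rows, rid=run_id,
        )
        conn.commit()
    except Exception as exc:
        conn.rollback()  # undo the partial load, but keep the committed control row
        # 4. Record the failure for auditing instead of losing it.
        cur.execute(
            """INSERT INTO etl_error_log (run_id, error_text, logged_at)
               VALUES (:rid, :msg, SYSTIMESTAMP)""",
            rid=run_id, msg=str(exc)[:4000],
        )
        cur.execute(
            "UPDATE etl_control SET status = 'FAILED', ended_at = SYSTIMESTAMP WHERE run_id = :rid",
            rid=run_id,
        )
        conn.commit()
        raise

if __name__ == "__main__":
    with oracledb.connect(user="etl_user", password="***", dsn="dbhost/service") as conn:
        run_load(conn)
```

The key design point in this pattern is that the control row is committed before the main load, so a failed MERGE can be rolled back while the run and its error remain auditable.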
Posted 1 month ago
10.0 years
0 Lacs
Pune, Maharashtra, India
Remote
About Fusemachines
Fusemachines is a 10+ year old AI company, dedicated to delivering state-of-the-art AI products and solutions to a diverse range of industries. Founded by Sameer Maskey, Ph.D., an Adjunct Associate Professor at Columbia University, our company is on a steadfast mission to democratize AI and harness the power of global AI talent from underserved communities. With a robust presence in four countries and a dedicated team of over 400 full-time employees, we are committed to fostering AI transformation journeys for businesses worldwide. At Fusemachines, we not only bridge the gap between AI advancement and its global impact but also strive to deliver the most advanced technology solutions to the world.

About The Role
This is a remote, full-time contractual position in the Travel & Hospitality industry, responsible for designing, building, testing, optimizing and maintaining the infrastructure and code required for data integration, storage, processing, pipelines and analytics (BI, visualization and advanced analytics) from ingestion to consumption, implementing data flow controls, and ensuring high data quality and accessibility for analytics and business intelligence purposes. This role requires a strong foundation in programming and a keen understanding of how to integrate and manage data effectively across various storage systems and technologies. We're looking for someone who can quickly ramp up, contribute right away and work independently as well as with junior team members with minimal oversight. We are looking for a skilled Sr. Data Engineer with a strong background in Python, SQL, PySpark, Redshift, and AWS cloud-based large-scale data solutions, with a passion for data quality, performance and cost optimization. The ideal candidate will develop in an Agile environment. This role is perfect for an individual passionate about leveraging data to drive insights, improve decision-making, and support the strategic goals of the organization through innovative data engineering solutions.

Qualification / Skill Set Requirement:
Must have a full-time Bachelor's degree in Computer Science, Information Systems, Engineering, or a related field. 5+ years of real-world data engineering development experience in AWS (certifications preferred). Strong expertise in Python, SQL, PySpark and AWS in an Agile environment, with a proven track record of building and optimizing data pipelines, architectures, and datasets, and proven experience in data storage, modelling, management, lakes, warehousing, processing/transformation, integration, cleansing, validation and analytics. A senior person who can understand requirements and design end-to-end solutions with minimal oversight. Strong programming skills in one or more languages such as Python or Scala, and proficiency in writing efficient and optimized code for data integration, storage, processing and manipulation. Strong knowledge of SDLC tools and technologies, including project management software (Jira or similar), source code management (GitHub or similar), CI/CD systems (GitHub Actions, AWS CodeBuild or similar) and a binary repository manager (AWS CodeArtifact or similar). Good understanding of data modelling and database design principles; able to design and implement efficient database schemas that meet the requirements of the data architecture to support data solutions. Strong SQL skills and experience working with complex data sets, enterprise data warehouses and writing advanced SQL queries.
Proficient with relational databases (RDS, MySQL, Postgres, or similar) and NoSQL databases (Cassandra, MongoDB, Neo4j, etc.). Skilled in data integration from different sources such as APIs, databases, flat files, and event streaming. Strong experience in implementing data pipelines and efficient ELT/ETL processes, batch and real-time, in AWS and using open-source solutions, with the ability to develop custom integration solutions as needed, including data integration from different sources such as APIs (PoS integrations are a plus), ERPs (Oracle and Allegra are a plus), databases, flat files, Apache Parquet and event streaming, including cleansing, transformation and validation of the data. Strong experience with scalable and distributed data technologies such as Spark/PySpark, DBT and Kafka, to handle large volumes of data. Experience with stream-processing systems (Storm, Spark Streaming, etc.) is a plus. Strong experience in designing and implementing data warehousing solutions in AWS with Redshift. Demonstrated experience in designing and implementing efficient ELT/ETL processes that extract data from source systems, transform it (DBT), and load it into the data warehouse. Strong experience in orchestration using Apache Airflow (see the DAG sketch after this posting). Expertise in cloud computing on AWS, including deep knowledge of a variety of AWS services such as Lambda, Kinesis, S3, Lake Formation, EC2, EMR, ECS/ECR, IAM, CloudWatch, etc. Good understanding of data quality and governance, including implementation of data quality checks and monitoring processes to ensure that data is accurate, complete, and consistent. Good understanding of BI solutions, including Looker and LookML (Looker Modelling Language). Strong knowledge and hands-on experience of DevOps principles, tools and technologies (GitHub and AWS DevOps), including continuous integration, continuous delivery (CI/CD), infrastructure as code (IaC – Terraform), configuration management, automated testing, performance tuning, and cost management and optimization. Good problem-solving skills: able to troubleshoot data processing pipelines and identify performance bottlenecks and other issues. Possesses strong leadership skills with a willingness to lead, create ideas, and be assertive. Strong project management and organizational skills. Excellent communication skills to collaborate with cross-functional teams, including business users, data architects, DevOps/DataOps/MLOps engineers, data analysts, data scientists, developers, and operations teams.
Able to convey complex technical concepts and insights to non-technical stakeholders effectively. Ability to document processes, procedures, and deployment configurations.

Responsibilities:
Design, implement, deploy, test and maintain highly scalable and efficient data architectures, defining and maintaining standards and best practices for data management independently with minimal guidance. Ensure the scalability, reliability, quality and performance of data systems. Mentor and guide junior/mid-level data engineers. Collaborate with Product, Engineering, Data Scientists and Analysts to understand data requirements and develop data solutions, including reusable components. Evaluate and implement new technologies and tools to improve data integration, data processing and analysis. Design architecture, observability and testing strategies, and build reliable infrastructure and data pipelines. Take ownership of the storage layer and data management tasks, including schema design, indexing, and performance tuning. Swiftly address and resolve complex data engineering issues and incidents, and eliminate bottlenecks in SQL queries and database operations. Conduct discovery on the existing data infrastructure and proposed architecture. Evaluate and implement cutting-edge technologies and methodologies, and continue learning and expanding skills in data engineering and cloud platforms, to improve and modernize existing data systems. Evaluate, design, and implement data governance solutions: cataloguing, lineage, quality and data governance frameworks that are suitable for a modern analytics solution, considering industry-standard best practices and patterns. Define and document data engineering architectures, processes and data flows. Assess best practices and design schemas that match business needs for delivering a modern analytics solution (descriptive, diagnostic, predictive, prescriptive). Be an active member of our Agile team, participating in all ceremonies and continuous improvement activities.

Fusemachines is an equal opportunity employer, committed to diversity and inclusion. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or any other characteristic protected by applicable federal, state, or local laws.
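As a rough illustration of the Airflow orchestration called out above, here is a minimal DAG sketch. It assumes Airflow 2.x (2.4+ for the schedule argument); the DAG id, task names and logic are hypothetical placeholders, not the actual pipeline.

```python
# Minimal sketch of an Airflow DAG orchestrating a daily ELT run
# (task logic and names are illustrative placeholders).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**context):
    # Pull raw bookings from a source API or S3 drop into a staging area.
    print("extracting raw data for", context["ds"])

def transform(**context):
    # Cleanse and validate, e.g. with PySpark or DBT, before loading.
    print("transforming data for", context["ds"])

def load(**context):
    # COPY the curated files into Redshift (or run dbt models).
    print("loading data for", context["ds"])

with DAG(
    dag_id="daily_bookings_elt",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
    tags=["example"],
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```

In a real pipeline, the placeholder callables would typically hand off to PySpark jobs, DBT runs, or Redshift COPY commands, with retries and SLAs configured on the operators.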
Posted 1 month ago
5.0 - 8.0 years
15 - 25 Lacs
Hyderabad
Work from Office
Roles and Responsibilities
Design, develop, test, and deploy full-stack applications using Node.js, Express, MongoDB, Redis, React.js, AWS, NestJS, SQL, microservices, REST and JSON. Collaborate with cross-functional teams to identify requirements and deliver high-quality solutions. Develop scalable and efficient algorithms for data processing and storage management. Ensure seamless integration of multiple services through API design and implementation. Participate in code reviews to maintain coding standards and best practices.

Desired Candidate Profile
5-8 years of experience as a Full Stack Software Engineer with expertise in both Node.js and React.js. Bachelor's degree in any specialization (B.Tech/B.E.). Strong understanding of the software development life cycle, including design patterns, testing methodologies, version control systems (Git), and continuous integration/continuous deployment pipelines (CI/CD). Proficiency in working with databases such as MySQL or NoSQL databases like MongoDB.
Posted 1 month ago
5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Our Company
Changing the world through digital experiences is what Adobe’s all about. We give everyone—from emerging artists to global brands—everything they need to design and deliver exceptional digital experiences. We’re passionate about empowering people to create beautiful and powerful images, videos, and apps, and transform how companies interact with customers across every screen. We’re on a mission to hire the very best and are committed to creating exceptional employee experiences where everyone is respected and has access to equal opportunity. We realize that new ideas can come from everywhere in the organization, and we know the next big idea could be yours!

The Opportunity
Use your expertise in data science engineering to drive the next stage of growth at Adobe. The Customer Analytics & GTM team is focused on using the power of data to deliver optimized experiences through personalization. This role will drive data engineering for large-scale data science initiatives across a wide variety of strategic projects. As a member of the Data Science Engineering team, you will have significant responsibility to help build a large-scale, cloud-based data and analytics platform with enterprise-wide consumers. This role is inherently multi-functional, and the ideal candidate will work across teams. The position requires ownership, innovative solutions, a willingness to try new tools and technologies, and an entrepreneurial personality. Come join us for a truly exciting career, excellent benefits and outstanding work-life balance.

What You Will Do
Build fault-tolerant, scalable, quality data pipelines using multiple cloud-based tools. Develop analytical and personalization capabilities using pioneering technologies, leveraging Adobe tools. Build LLM agents to optimize and automate data pipelines following best engineering practices. Deliver end-to-end data pipelines to run machine learning models in a production platform. Deliver innovative solutions that help the broader organization take significant actions quickly and efficiently. Contribute to data engineering and data science frameworks, tools, and processes. Implement outstanding data operations and standard methodologies to use resources optimally. Architect data ingestion, data transformation, data consumption and data governance frameworks. Help build production-grade ML models and integrate them with operational systems. This is a high-visibility role for a team on a critical mission to stop software piracy. A lot of collaboration with global multi-functional operations teams is required to onboard customers to genuine software. Work in a collaborative environment and contribute to the team's as well as the organization's success.

What You Will Need
Bachelor's degree in computer science or equivalent. Master's degree or equivalent experience is preferred.
5-8 years of consistent track record as a data engineer. At least 2+ years of demonstrable experience and a proven track record with the mobile data ecosystem is a must: App Store Optimization (ASO); third-party systems such as Branch, RevenueCat, and Google and Apple APIs; and building data pipelines for in-app purchases, paywall impressions and tracking, app crashes, etc. 5+ years of validated ability in distributed data technologies, e.g., Hadoop, Hive, Presto, Spark, etc. 3+ years of experience with cloud-based technologies – Databricks, S3, Azure Blob Storage, Notebooks, AWS EMR, Athena, Glue, etc. Familiarity with and usage of different file formats in batch/streaming processing, i.e., Delta/Parquet/ORC, etc. 2+ years' experience with streaming data ingestion and transformation using Kafka, Kinesis, etc. Outstanding SQL experience, with the ability to write optimized SQL across platforms. Proven hands-on experience in Python/PySpark/Scala, the ability to manipulate data using Pandas, NumPy, Koalas, etc., and use of APIs to transfer data. Experience working as an architect to design large-scale distributed data platforms. Experience with CI/CD tools, i.e., GitHub, Jenkins, etc. Working experience with open-source orchestration tools, i.e., Apache Airflow, Azkaban, etc. A teammate with excellent communication/teamwork skills for working closely with data scientists and machine learning engineers daily. Hands-on work experience with the Elastic Stack (Elasticsearch, Logstash, Kibana) and graph databases (Neo4j, Neptune, etc.) is highly desired. Work experience with ML algorithms and frameworks, i.e., Keras, TensorFlow, PyTorch, XGBoost, Linear Regression, Classification, Random Forest, Clustering, MLflow, etc.

Nice to have
Showcase your work if you are an open-source contributor; passion for contributing to the open-source community is highly valued. Experience with data governance tools, e.g., Collibra, and collaboration tools, e.g., JIRA/Confluence. Familiarity with Adobe tools like Adobe Experience Platform, Adobe Analytics, Customer Journey Analytics, Adobe Journey Optimizer is a plus. Experience with LLM models/agentic workflows using Copilot, Claude, LLAMA, Databricks Genie, etc. is highly preferred.

Adobe is proud to be an Equal Employment Opportunity and affirmative action employer. We do not discriminate based on gender, race or color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law. Learn more about our vision here. Adobe values a free and open marketplace for all employees and has policies in place to ensure that we do not enter into illegal agreements with other companies to not recruit or hire each other's employees. Adobe aims to make Adobe.com accessible to any and all users. If you have a disability or special need that requires accommodation to navigate our website or complete the application process, email accommodations@adobe.com or call (408) 536-3015.
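As a small illustration of the PySpark skills listed above, here is a hedged sketch of a batch job that rolls up in-app purchase events. The paths, schema and column names are hypothetical assumptions, not Adobe specifics.

```python
# Minimal sketch of a PySpark batch job aggregating mobile purchase events
# (paths, column names and schema are hypothetical placeholders).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("iap_daily_rollup").getOrCreate()

# Read raw in-app purchase events landed as Parquet by an upstream ingestion job.
events = spark.read.parquet("s3://raw-bucket/mobile/iap_events/")

daily = (
    events
    .filter(F.col("event_type") == "purchase")
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date", "app_id", "product_id")
    .agg(
        F.count("*").alias("purchases"),
        F.sum("revenue_usd").alias("revenue_usd"),
        F.countDistinct("user_id").alias("buyers"),
    )
)

# Write a partitioned, analytics-ready table for downstream SQL / BI consumers.
(daily.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("s3://curated-bucket/mobile/iap_daily/"))
```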
Posted 1 month ago
10.0 years
4 - 7 Lacs
Thiruvananthapuram
On-site
Overview: The Technology Solution Delivery - Front Line Manager (M1) is responsible for providing leadership and day-to-day direction to a cross-functional engineering team. This role involves establishing and executing operational plans, managing relationships with internal and external customers, and overseeing technical fulfillment projects. The manager also supports sales verticals in customer interactions and ensures the delivery of technology solutions aligns with business needs.

What you will do: Build strong relationships with both internal and external stakeholders, including product, business and sales partners. Demonstrate excellent communication skills, with the ability both to simplify complex problems and to dive deeper when needed. Manage teams with cross-functional skills that include software, quality and reliability engineers, project managers and scrum masters. Mentor, coach and develop junior and senior software, quality and reliability engineers. Collaborate with the architects, SRE leads and other technical leadership on strategic technical direction, guidelines, and best practices. Ensure compliance with EFX secure software development guidelines and best practices, and take responsibility for meeting and maintaining QE, DevSec, and FinOps KPIs. Define, maintain and report SLAs, SLOs and SLIs that meet EFX engineering standards, in partnership with the product, engineering and architecture teams. Drive technical documentation, including support and end-user documentation and runbooks. Lead sprint planning, sprint retrospectives, and other team activities. Implement architecture decision-making associated with product features/stories, refactoring work, and EOSL decisions. Create and deliver technical presentations to internal and external technical and non-technical stakeholders, communicating with clarity and precision, and present complex information in a concise format that is audience appropriate. Provide coaching, leadership and talent development; ensure the team functions as a high-performing team; identify performance gaps and opportunities for upskilling and transition when necessary. Drive a culture of accountability through actions and stakeholder engagement and expectation management. Develop the long-term technical vision and roadmap within, and often beyond, the scope of your teams. Oversee systems designs within the scope of the broader area, and review product or system development code to solve ambiguous problems. Identify and resolve problems affecting day-to-day operations. Set priorities for the engineering team and coordinate work activities with other supervisors. Cloud certification strongly preferred.

What experience you need: BS or MS degree in a STEM major or equivalent job experience required. 10+ years' experience in software development and delivery. You adore working in a fast-paced and agile development environment. You possess excellent communication, sharp analytical abilities, and proven design skills. You have detailed knowledge of modern software development lifecycles, including CI/CD. You have the ability to operate across a broad and complex business unit with multiple stakeholders. You have an understanding of the key aspects of finance, especially as related to technology.
Specifically, this includes total cost of ownership and value. You are a self-starter, highly motivated, and have a real passion for actively learning and researching new methods of work and new technology. You possess excellent written and verbal communication skills, with the ability to communicate with team members at various levels, including business leaders.

What could set you apart: UI development (e.g. HTML, JavaScript, AngularJS, Angular 4/5 and Bootstrap). Source code management systems (e.g. Git, Subversion) and build tools like Maven. Big Data, Postgres, Oracle, MySQL, NoSQL databases (e.g. Cassandra, Hadoop, MongoDB, Neo4j). Design patterns. Agile environments (e.g. Scrum, XP). Software development best practices such as TDD (e.g. JUnit), automated testing (e.g. Gauge, Cucumber, FitNesse), continuous integration (e.g. Jenkins, GoCD). Linux command line and shell scripting languages. Relational databases (e.g. SQL Server, MySQL). Cloud computing, SaaS (Software as a Service). Atlassian tooling (e.g. JIRA, Confluence, and Bitbucket). Experience working in financial services. Experience working with open-source frameworks, preferably Spring, though we would also consider Ruby, Apache Struts, Symfony, Django, etc. Automated testing: JUnit, Selenium, LoadRunner, SoapUI.

Behaviors: Customer-focused with a drive to exceed expectations. Demonstrates integrity and accountability. Intellectually curious and driven to innovate. Values diversity and fosters collaboration. Results-oriented with a sense of urgency and agility.
Posted 1 month ago
0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Job Description
Are You Ready to Make It Happen at Mondelēz International? Join our Mission to Lead the Future of Snacking. Make It With Pride. Together with analytics team leaders you will support our business with excellent data models to uncover trends that can drive long-term business results.

How You Will Contribute
You will: Execute the business analytics agenda in conjunction with analytics team leaders. Work with best-in-class external partners who leverage analytics tools and processes. Use models/algorithms to uncover signals/patterns and trends to drive long-term business performance. Execute the business analytics agenda using a methodical approach that conveys to stakeholders what business analytics will deliver.

What You Will Bring
A desire to drive your future and accelerate your career, and the following experience and knowledge: Using data analysis to make recommendations to analytics leaders. Understanding of best-in-class analytics practices. Knowledge of key performance indicators (KPIs) and scorecards. Knowledge of BI tools like Tableau, Excel, Alteryx, R, Python, etc. is a plus.

In This Role
As a DaaS Data Engineer, you will have the opportunity to design and build scalable, secure, and cost-effective cloud-based data solutions. You will develop and maintain data pipelines to extract, transform, and load data into data warehouses or data lakes, ensuring data quality and validation processes to maintain data accuracy and integrity. You will ensure efficient data storage and retrieval for optimal performance, and collaborate closely with data teams, product owners, and other stakeholders to stay updated with the latest cloud technologies and best practices.

Role & Responsibilities:
Design and Build: Develop and implement scalable, secure, and cost-effective cloud-based data solutions. Manage Data Pipelines: Develop and maintain data pipelines to extract, transform, and load data into data warehouses or data lakes. Ensure Data Quality: Implement data quality and validation processes to ensure data accuracy and integrity. Optimize Data Storage: Ensure efficient data storage and retrieval for optimal performance. Collaborate and Innovate: Work closely with data teams and product owners, and stay updated with the latest cloud technologies and best practices to remain current in the field.

Technical Requirements:
Programming: Python, PySpark, Go/Java. Database: SQL, PL/SQL. ETL & Integration: DBT, Databricks + DLT, AecorSoft, Talend, Informatica/Pentaho/Ab Initio, Fivetran. Data Warehousing: SCD, schema types, data marts. Visualization: Databricks Notebook, Power BI, Tableau, Looker. GCP Cloud Services: BigQuery, GCS, Cloud Functions, Pub/Sub, Dataflow, Dataproc, Dataplex. AWS Cloud Services: S3, Redshift, Lambda, Glue, CloudWatch, EMR, SNS, Kinesis. Supporting Technologies: Graph databases/Neo4j, Erwin, Collibra, Ataccama DQ, Kafka, Airflow. Experience with the RGM.ai product would be an added advantage.

Soft Skills:
Problem-Solving: The ability to identify and solve complex data-related challenges. Communication: Effective communication skills to collaborate with product owners, analysts, and stakeholders. Analytical Thinking: The capacity to analyse data and draw meaningful insights. Attention to Detail: Meticulousness in data preparation and pipeline development. Adaptability: The ability to stay updated with emerging technologies and trends in the data engineering field.
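To illustrate the "Ensure Data Quality" responsibility above, here is a minimal, hedged sketch of a pre-load validation gate in Python/pandas; the thresholds, column names and sample batch are illustrative assumptions only.

```python
# Minimal sketch of pre-load data quality checks (thresholds and column names
# are illustrative assumptions, not the actual pipeline's rules).
import pandas as pd

REQUIRED_COLUMNS = {"order_id", "order_date", "net_revenue"}

def validate(df: pd.DataFrame) -> list:
    """Return a list of data quality failures; an empty list means the batch may load."""
    failures = []
    missing = REQUIRED_COLUMNS - set(df.columns)
    if missing:
        failures.append(f"missing columns: {sorted(missing)}")
        return failures
    if df.empty:
        failures.append("batch is empty")
    if df["order_id"].duplicated().any():
        failures.append("duplicate order_id values")
    null_share = df["net_revenue"].isna().mean()
    if null_share > 0.01:  # tolerate at most 1% nulls in the revenue column
        failures.append(f"net_revenue null share {null_share:.1%} exceeds 1%")
    return failures

if __name__ == "__main__":
    # A deliberately bad sample batch: duplicate key and a null revenue value.
    batch = pd.DataFrame(
        {"order_id": [1, 2, 2],
         "order_date": ["2024-05-01"] * 3,
         "net_revenue": [10.0, None, 7.5]}
    )
    problems = validate(batch)
    if problems:
        raise ValueError("data quality gate failed: " + "; ".join(problems))
```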
Within-country relocation support is available, and for candidates voluntarily moving internationally some minimal support is offered through our Volunteer International Transfer Policy.

Business Unit Summary
At Mondelēz International, our purpose is to empower people to snack right by offering the right snack, for the right moment, made the right way. That means delivering a broad range of delicious, high-quality snacks that nourish life's moments, made with sustainable ingredients and packaging that consumers can feel good about. We have a rich portfolio of strong brands globally and locally, including many household names such as Oreo, belVita and LU biscuits; Cadbury Dairy Milk, Milka and Toblerone chocolate; Sour Patch Kids candy and Trident gum. We are proud to hold the top position globally in biscuits, chocolate and candy and the second top position in gum. Our 80,000 makers and bakers are located in more than 80 countries and we sell our products in over 150 countries around the world. Our people are energized for growth and critical to us living our purpose and values. We are a diverse community that can make things happen, and happen fast.

Mondelēz International is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, gender, sexual orientation or preference, gender identity, national origin, disability status, protected veteran status, or any other characteristic protected by law.

Job Type: Regular. Analytics & Modelling. Analytics & Data Science.
Posted 1 month ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Company Description
NXP Semiconductors enables secure connections and infrastructure for a smarter world, advancing solutions that make lives easier, better and safer. As the world leader in secure connectivity solutions for embedded applications, we are driving innovation in the secure connected vehicle, end-to-end security & privacy and smart connected solutions markets.

Organization Description
Do you feel challenged by being part of the IT department of NXP, the company with a mission of "Secure Connections for a Smarter World"? Do you perform best in a role representing IT in projects in a fast-moving, international environment? Within R&D IT Solutions, the Product Creation Applications (PCA) department is responsible for providing and supporting the global R&D design community with best-in-class applications and support. The applications are used by over 6,000 designers.

Job Summary
As a Graph Engineer, you will: Develop pipelines and code to support the ingress and egress of data to and from the knowledge graphs. Perform basic and advanced graph querying and data modeling on the knowledge graphs that lie at the heart of the organization's Product Creation ecosystem. Maintain the (ETL) pipelines, code and knowledge graph so they stay scalable, resilient and performant, in line with customer requirements. Work in an international and Agile DevOps environment. This position offers an opportunity to work in a globally distributed team where you will get a unique opportunity for personal development in a multi-cultural environment. You will also get a challenging environment in which to develop expertise in technologies useful in the industry.

Primary Responsibilities
Translate requirements of business functions into "graph thinking". Build and maintain graphs and related applications from data and information, using the latest graph technologies to leverage high-value use cases. Support and manage graph databases. Integrate graph data from various sources – internal and external. Extract data from various sources, including databases, APIs, and flat files. Load data into target systems, such as data warehouses and data lakes. Develop code to move data (ETL) from the enterprise platform applications into the enterprise knowledge graphs. Optimize ETL processes for performance and scalability. Collaborate with data engineers, data scientists and other stakeholders to model the graph environment to best represent the data coming from multiple enterprise systems.

Skills / Experience
Semantic Web technologies: RDF, RDFS, OWL, SHACL, SPARQL, JSON-LD, N-Triples/N-Quads, Turtle, RDF/XML, TriX. API-led architectures: REST, SOAP, microservices, API management. Graph databases, such as Dydra, Amazon Neptune, Neo4j and Oracle Spatial & Graph, are a plus. Experience with other NoSQL databases, such as key-value databases and document-based databases (e.g. XML databases), is a plus. Experience with relational databases. Programming experience, preferably Java, JavaScript, Python, PL/SQL. Experience with web technologies: HTML, CSS, XML, XSLT, XPath. Experience with modelling languages such as UML. Understanding of CI/CD automation, version control, build automation, testing frameworks, static code analysis, IT service management, artifact management, container management, and experience with related tools and platforms. Familiarity with cloud computing concepts (e.g. in AWS and Azure).
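As a brief illustration of the graph querying and Semantic Web skills listed above, here is a minimal sketch using the rdflib Python library; the namespace, ontology and data are hypothetical placeholders, not NXP's actual model.

```python
# Minimal sketch: load RDF (Turtle) into a graph and query it with SPARQL.
# The ex: namespace and triples below are illustrative placeholders.
from rdflib import Graph

TURTLE = """
@prefix ex: <http://example.org/pc#> .

ex:chipA a ex:Design ;
    ex:usesIP ex:ip42 .
ex:ip42 a ex:IPBlock ;
    ex:version "2.1" .
"""

QUERY = """
PREFIX ex: <http://example.org/pc#>
SELECT ?design ?ip ?version WHERE {
    ?design a ex:Design ;
            ex:usesIP ?ip .
    ?ip ex:version ?version .
}
"""

g = Graph()
g.parse(data=TURTLE, format="turtle")

# Each result row binds the SPARQL variables in order.
for design, ip, version in g.query(QUERY):
    print(f"{design} uses {ip} (version {version})")
```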
Education & Personal Skillsets
A master's or bachelor's degree in computer science, mathematics, electronics engineering or a related discipline, with at least 10 years of experience in a similar role. Excellent problem-solving and analytical skills. A growth mindset with a curiosity to learn and improve. Team player with strong interpersonal, written, and verbal communication skills. Business consulting and technical consulting skills. An entrepreneurial spirit and the ability to foster a positive and energized culture. You can demonstrate fluent communication skills in English (spoken and written). Experience working in Agile (Scrum knowledge appreciated) with a DevOps mindset.
Posted 1 month ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Tata Consultancy Services is hiring Python Full Stack Developers!

Role: Python Full Stack Developer
Desired Experience Range: 6-8 years
Location of Requirement: Hyderabad and Kolkata

Desired Skills - Technical/Behavioral

Frontend: 6+ years of overall experience with proficiency in React (2+ years), TypeScript (1+ year) and React hooks (1+ year). Experience with ESLint, CSS-in-JS styling (preferably Emotion), state management (preferably Redux), and JavaScript bundlers such as Webpack. Experience with integrating with RESTful APIs or other web services.

Backend: Expertise with Python (3+ years, preferably Python 3). Proficiency with a Python web framework (2+ years, preferably Flask and FastAPI). Experience with a Python linter (preferably flake8), graph databases (preferably Neo4j), a package manager (preferably pip), Elasticsearch, and Airflow. Experience with developing microservices, RESTful APIs or other web services. Experience with database design and management, including NoSQL/RDBMS tradeoffs.

Interested and eligible candidates can apply!
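As a small illustration of the backend stack described above (FastAPI plus a graph database), here is a hedged sketch of a FastAPI endpoint querying Neo4j with the official Python driver; the URI, credentials and Cypher data model are hypothetical placeholders.

```python
# Minimal sketch of a FastAPI endpoint backed by Neo4j
# (connection details and the graph data model are placeholders).
from fastapi import FastAPI
from neo4j import GraphDatabase

app = FastAPI()
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

@app.get("/users/{user_id}/friends")
def get_friends(user_id: str):
    # Traverse the graph for the user's direct connections.
    cypher = (
        "MATCH (u:User {id: $user_id})-[:FRIENDS_WITH]->(f:User) "
        "RETURN f.id AS id, f.name AS name"
    )
    with driver.session() as session:
        records = session.run(cypher, user_id=user_id)
        # Consume the result inside the session before it closes.
        return [record.data() for record in records]

# Run locally (assumption: uvicorn is installed):
#   uvicorn main:app --reload
```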
Posted 1 month ago