3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
If you are looking for a career at a dynamic company with a people-first mindset and a deep culture of growth and autonomy, ACV is the right place for you! With competitive compensation packages and learning and development opportunities, ACV has what you need to advance to the next level in your career. We will continue to raise the bar every day by investing in our people and technology to help our customers succeed. We hire people who share our passion, bring innovative ideas to the table, and enjoy a collaborative atmosphere.

Who we are:
ACV is a technology company that has revolutionized how dealers buy and sell cars online. We are transforming the automotive industry. ACV Auctions Inc. (ACV) applies innovation and user-designed, data-driven applications and solutions. We are building the most trusted and efficient digital marketplace with data solutions for sourcing, selling, and managing used vehicles with transparency and comprehensive insights that were once unimaginable. We are disruptors of the industry and we want you to join us on our journey. Our network of brands includes ACV Auctions, ACV Transportation, ClearCar, MAX Digital, and ACV Capital within its Marketplace Products, as well as True360 and Data Services. ACV Auctions in Chennai, India is looking for talented individuals to join our team. As we expand our platform, we're offering a wide range of exciting opportunities across various roles in corporate, operations, and product and technology. Our global product and technology organization spans product management, engineering, data science, machine learning, DevOps, and program leadership. What unites us is a deep sense of customer centricity, calm persistence in solving hard problems, and a shared passion for innovation. If you're looking to grow, lead, and contribute to something larger than yourself, we'd love to have you on this journey. Let's build something extraordinary together. Join us in shaping the future of automotive! At ACV we focus on the Health, Physical, Financial, Social, and Emotional Wellness of our Teammates, and to support this we offer industry-leading benefits and wellness programs.

Who we are looking for:
The data engineering team's mission is to provide high availability and high resiliency as a core service to our ACV applications. The team is responsible for ETLs using different ingestion and transformation techniques. We are responsible for a range of critical tasks aimed at ensuring smooth and efficient functioning and high availability of ACV's data platforms. We are a crucial bridge between Infrastructure Operations, Data Infrastructure, Analytics, and Development teams, providing valuable feedback and insights to continuously improve platform reliability, functionality, and overall performance. We are seeking a talented data professional as a Data Engineer III to join our Data Engineering team. This role requires a strong focus and experience in software development, multi-cloud technologies, and in-memory data stores, as well as a strong desire to learn complex systems and new technologies. It requires a sound foundation in database and infrastructure architecture, deep technical knowledge, software development, excellent communication skills, and an action-based philosophy to solve hard software engineering problems.

What you will do:
As a Data Engineer at ACV Auctions, you HAVE FUN! You will design, develop, write, and modify code. You will be responsible for development of ETLs, application architecture, and optimizing databases and SQL queries.
You will work alongside other data engineers and data scientists in the design and development of solutions to ACV's most complex software problems. It is expected that you will be able to operate in a high-performing team, that you can balance high-quality delivery with customer focus, and that you will have a record of delivering and guiding team members in a fast-paced environment. Design, develop, and maintain scalable ETL pipelines using Python and SQL to ingest, process, and transform data from diverse sources. Write clean, efficient, and well-documented code in Python and SQL. Utilize Git for version control and collaborate effectively with other engineers. Implement and manage data orchestration workflows using industry-standard orchestration tools (e.g., Apache Airflow, Prefect); a minimal illustrative sketch of such a workflow follows this listing. Apply a strong understanding of major data structures (arrays, dictionaries, strings, trees, nodes, graphs, linked lists) to optimize data processing and storage. Support multi-cloud application development. Contribute, influence, and set standards for all technical aspects of a product or service, including but not limited to testing, debugging, performance, and languages. Support development stages for application development and data science teams, emphasizing MySQL and Postgres database development. Influence companywide engineering standards for tooling, languages, and build systems. Leverage monitoring tools to ensure high performance and availability; work with operations and engineering to improve as required. Ensure that data development meets company standards for readability, reliability, and performance. Collaborate with internal teams on transactional and analytical schema design. Conduct code reviews, develop high-quality documentation, and build robust test suites. Respond to and troubleshoot highly complex problems quickly, efficiently, and effectively. Participate in engineering innovations, including discovery of new technologies, implementation strategies, and architectural improvements. Participate in the on-call rotation.

What you will need:
Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent work experience). Ability to read, write, speak, and understand English. 3+ years of experience programming in Python. 3+ years of experience with ETL workflow implementation (Airflow, Python). 3+ years of work with continuous integration and build tools. 2+ years of experience with cloud platforms, preferably AWS or GCP. Knowledge of database architecture, infrastructure, performance tuning, and optimization techniques. Knowledge of day-to-day tools and how they work, including deployments, k8s, monitoring systems, and testing tools. Proficient in version control systems, including trunk-based development, multiple release planning, cherry-picking, and rebase. Proficient in databases (RDB) and SQL, and able to contribute to table definitions. Self-sufficient debugger who can identify and solve complex problems in code. Deep understanding of major data structures (arrays, dictionaries, strings). Experience with Domain-Driven Design. Experience with containers and Kubernetes. Experience with database monitoring and diagnostic tools, preferably Datadog. Hands-on skills and the ability to drill deep into complex system design and implementation. Proficiency in SQL query writing and optimization. Familiarity with database security principles and best practices.
Familiarity with in-memory data processing. Knowledge of data warehousing concepts and technologies, including dimensional modeling and ETL frameworks. Strong communication and collaboration skills, with the ability to work effectively in a fast-paced, global team environment. Experience working with: SQL data-layer development; OLTP schema design; using and integrating with cloud services, specifically AWS RDS, Aurora, S3, GCP; GitHub, Jenkins, Python.

Nice to Have Qualifications:
Experience with Airflow, Docker, Visual Studio, PyCharm, Redis, Kubernetes, Fivetran, Spark, Dataflow, Dataproc, EMR. Experience with database monitoring and diagnostic tools, preferably Datadog. Hands-on experience with Kafka or other event streaming technologies. Hands-on experience with microservice architecture.

Our Values
Trust & Transparency | People First | Positive Experiences | Calm Persistence | Never Settling

At ACV, we are committed to an inclusive culture in which every individual is welcomed and empowered to celebrate their true selves. We achieve this by fostering a work environment of acceptance and understanding that is free from discrimination. ACV is committed to being an equal opportunity employer regardless of sex, race, creed, color, religion, marital status, national origin, age, pregnancy, sexual orientation, gender, gender identity, gender expression, genetic information, disability, military status, status as a veteran, or any other protected characteristic. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. If you have a disability or special need that requires reasonable accommodation, please let us know.

Data Processing Consent
When you apply to a job on this site, the personal data contained in your application will be collected by ACV Auctions Inc. and/or one of its subsidiaries ("ACV Auctions"). By clicking "apply", you hereby provide your consent to ACV Auctions and/or its authorized agents to collect and process your personal data for the purpose of your recruitment at ACV Auctions and processing your job application. ACV Auctions may use services provided by a third-party service provider to help manage its recruitment and hiring process. For more information about how your personal data will be processed by ACV Auctions and any rights you may have, please review ACV Auctions' candidate privacy notice here. If you have any questions about our privacy practices, please contact datasubjectrights@acvauctions.com.
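To make the orchestration responsibility described above concrete, here is a minimal sketch of the kind of Airflow ETL workflow a role like this typically involves: a daily DAG that extracts records, transforms them, and loads them into a warehouse table. The DAG id, task names, and helper functions are hypothetical placeholders for illustration, not ACV's actual pipeline.

```python
"""Minimal illustrative Airflow DAG: extract -> transform -> load.

All names (dag_id, payload fields, helper functions) are hypothetical.
"""
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Stand-in for pulling raw records from a source system.
    return [{"vehicle_id": 1, "price": 12500}, {"vehicle_id": 2, "price": 9800}]


def transform(ti, **context):
    # Pull the extract task's output from XCom and apply a simple transformation.
    rows = ti.xcom_pull(task_ids="extract")
    return [{**row, "price_bucket": "high" if row["price"] > 10000 else "low"} for row in rows]


def load(ti, **context):
    # In a real pipeline this step would write to Postgres/MySQL or a warehouse.
    rows = ti.xcom_pull(task_ids="transform")
    print(f"Loading {len(rows)} rows into the warehouse (stub).")


with DAG(
    dag_id="example_vehicle_etl",  # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> transform_task >> load_task
```

In practice, XCom is only suitable for small payloads such as metadata or row counts; production pipelines usually stage the actual data in object storage or a database between tasks.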
Posted 1 month ago
4.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
If you are looking for a career at a dynamic company with a people-first mindset and a deep culture of growth and autonomy, ACV is the right place for you! With competitive compensation packages and learning and development opportunities, ACV has what you need to advance to the next level in your career. We will continue to raise the bar every day by investing in our people and technology to help our customers succeed. We hire people who share our passion, bring innovative ideas to the table, and enjoy a collaborative atmosphere.

Who we are:
ACV is a technology company that has revolutionized how dealers buy and sell cars online. We are transforming the automotive industry. ACV Auctions Inc. (ACV) applies innovation and user-designed, data-driven applications and solutions. We are building the most trusted and efficient digital marketplace with data solutions for sourcing, selling, and managing used vehicles with transparency and comprehensive insights that were once unimaginable. We are disruptors of the industry and we want you to join us on our journey. Our network of brands includes ACV Auctions, ACV Transportation, ClearCar, MAX Digital, and ACV Capital within its Marketplace Products, as well as True360 and Data Services. ACV Auctions in Chennai, India is looking for talented individuals to join our team. As we expand our platform, we're offering a wide range of exciting opportunities across various roles in corporate, operations, and product and technology. Our global product and technology organization spans product management, engineering, data science, machine learning, DevOps, and program leadership. What unites us is a deep sense of customer centricity, calm persistence in solving hard problems, and a shared passion for innovation. If you're looking to grow, lead, and contribute to something larger than yourself, we'd love to have you on this journey. Let's build something extraordinary together. Join us in shaping the future of automotive! At ACV we focus on the Health, Physical, Financial, Social, and Emotional Wellness of our Teammates, and to support this we offer industry-leading benefits and wellness programs.

Who we are looking for:
The data engineering team's mission is to provide high availability and high resiliency as a core service to our ACV applications. The team is responsible for ETLs using different ingestion and transformation techniques. We are responsible for a range of critical tasks aimed at ensuring smooth and efficient functioning and high availability of ACV's data platforms. We are a crucial bridge between Infrastructure Operations, Data Infrastructure, Analytics, and Development teams, providing valuable feedback and insights to continuously improve platform reliability, functionality, and overall performance. We are seeking a talented data professional as a Senior Data Engineer to join our Data Engineering team. This role requires a strong focus and experience in software development, multi-cloud technologies, and in-memory data stores, as well as a strong desire to learn complex systems and new technologies. It requires a sound foundation in database and infrastructure architecture, deep technical knowledge, software development, excellent communication skills, and an action-based philosophy to solve hard software engineering problems.

What you will do:
As a Data Engineer at ACV Auctions, you HAVE FUN! You will design, develop, write, and modify code.
You will be responsible for development of ETLs, application architecture, and optimizing databases and SQL queries. You will work alongside other data engineers and data scientists in the design and development of solutions to ACV's most complex software problems. It is expected that you will be able to operate in a high-performing team, that you can balance high-quality delivery with customer focus, and that you will have a record of delivering and guiding team members in a fast-paced environment. Design, develop, and maintain scalable ETL pipelines using Python and SQL to ingest, process, and transform data from diverse sources. Write clean, efficient, and well-documented code in Python and SQL. Utilize Git for version control and collaborate effectively with other engineers. Implement and manage data orchestration workflows using industry-standard orchestration tools (e.g., Apache Airflow, Prefect). Apply a strong understanding of major data structures (arrays, dictionaries, strings, trees, nodes, graphs, linked lists) to optimize data processing and storage. Support multi-cloud application development. Contribute, influence, and set standards for all technical aspects of a product or service, including but not limited to testing, debugging, performance, and languages. Support development stages for application development and data science teams, emphasizing MySQL and Postgres database development. Influence company-wide engineering standards for tooling, languages, and build systems. Leverage monitoring tools to ensure high performance and availability; work with operations and engineering to improve as required. Ensure that data development meets company standards for readability, reliability, and performance. Collaborate with internal teams on transactional and analytical schema design. Conduct code reviews, develop high-quality documentation, and build robust test suites. Respond to and troubleshoot highly complex problems quickly, efficiently, and effectively. Mentor junior data engineers. Assist with or lead technical discussions and innovation, including engineering tech talks. Assist in engineering innovations, including discovery of new technologies, implementation strategies, and architectural improvements. Participate in the on-call rotation.

What you will need:
Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent work experience). Ability to read, write, speak, and understand English. 4+ years of experience programming in Python. 3+ years of experience with ETL workflow implementation (Airflow, Python). 3+ years of work with continuous integration and build tools. 3+ years of experience with cloud platforms, preferably AWS or GCP. Knowledge of database architecture, infrastructure, performance tuning, and optimization techniques. Deep knowledge of day-to-day tools and how they work, including deployments, k8s, monitoring systems, and testing tools. Proficient in databases (RDB) and SQL, and able to contribute to schema definitions. Self-sufficient debugger who can identify and solve complex problems in code. Deep understanding of major data structures (arrays, dictionaries, strings). Experience with Domain-Driven Design. Experience with containers and Kubernetes. Experience with database monitoring and diagnostic tools, preferably Datadog. Hands-on skills and the ability to drill deep into complex system design and implementation. Proficiency in SQL query writing and optimization (a brief query-tuning sketch follows this listing). Experience with database security principles and best practices.
Experience with in-memory data processing. Experience working with data warehousing concepts and technologies, including dimensional modeling and ETL frameworks. Strong communication and collaboration skills, with the ability to work effectively in a fast-paced, global team environment. Experience working with: SQL data-layer development; OLTP schema design; using and integrating with cloud services, specifically AWS RDS, Aurora, S3, GCP; GitHub, Jenkins, Python, Docker, Kubernetes.

Nice To Have Qualifications:
Experience with Airflow, Docker, Visual Studio, PyCharm, Redis, Kubernetes, Fivetran, Spark, Dataflow, Dataproc, EMR. Experience with database monitoring and diagnostic tools, preferably Datadog. Hands-on experience with Kafka or other event streaming technologies. Hands-on experience with micro-service architecture.

Our Values
Trust & Transparency | People First | Positive Experiences | Calm Persistence | Never Settling

At ACV, we are committed to an inclusive culture in which every individual is welcomed and empowered to celebrate their true selves. We achieve this by fostering a work environment of acceptance and understanding that is free from discrimination. ACV is committed to being an equal opportunity employer regardless of sex, race, creed, color, religion, marital status, national origin, age, pregnancy, sexual orientation, gender, gender identity, gender expression, genetic information, disability, military status, status as a veteran, or any other protected characteristic. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. If you have a disability or special need that requires reasonable accommodation, please let us know.

Data Processing Consent
When you apply to a job on this site, the personal data contained in your application will be collected by ACV Auctions Inc. and/or one of its subsidiaries ("ACV Auctions"). By clicking "apply", you hereby provide your consent to ACV Auctions and/or its authorized agents to collect and process your personal data for the purpose of your recruitment at ACV Auctions and processing your job application. ACV Auctions may use services provided by a third-party service provider to help manage its recruitment and hiring process. For more information about how your personal data will be processed by ACV Auctions and any rights you may have, please review ACV Auctions' candidate privacy notice here. If you have any questions about our privacy practices, please contact datasubjectrights@acvauctions.com.
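As a companion to the SQL tuning and schema work called out above, here is a minimal sketch of how a Postgres query might be inspected and sped up from Python. The connection string, table, column names, and index are hypothetical placeholders; the intent is only to show the EXPLAIN-then-index workflow, not any specific production schema.

```python
"""Sketch: inspecting and speeding up a slow query on a hypothetical table."""
import psycopg2

DSN = "dbname=marketplace user=etl_user host=localhost"  # hypothetical connection string

QUERY = """
    SELECT auction_id, COUNT(*) AS bid_count
    FROM bids
    WHERE created_at >= NOW() - INTERVAL '7 days'
    GROUP BY auction_id
    ORDER BY bid_count DESC
    LIMIT 20;
"""

with psycopg2.connect(DSN) as conn:
    with conn.cursor() as cur:
        # Look at the plan first: a sequential scan on a large table is the usual culprit.
        cur.execute("EXPLAIN ANALYZE " + QUERY)
        for line in cur.fetchall():
            print(line[0])

        # A narrow index on the filter column typically turns the seq scan into an index scan.
        cur.execute("CREATE INDEX IF NOT EXISTS idx_bids_created_at ON bids (created_at);")
    conn.commit()
```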
Posted 1 month ago
3.0 years
0 Lacs
Thiruvananthapuram, Kerala, India
On-site
Join SADA as a Data Engineer (ESS)!

Your Mission:
As a Data Engineer at SADA, you will ensure our customers' support issues are handled effectively. You will work with highly skilled support engineers focused on providing Google Cloud Platform Data Engineering solutions, including BigQuery, Cloud SQL, Google Cloud Monitoring, and related Google services. The Data Engineer is responsible for providing technical assistance and guidance to team members and customers, updating knowledge articles, and enacting improvements to our ServiceNow incident management system, as well as being a SADA ambassador to our clients. Participating in on-call rotations, the Data Engineer must also be technically adept with Google products and be able to seamlessly and effectively partner with other SADA work groups, our partners, and our customers. SADA ESS delivers 24x7 support from a variety of locations around the world. This is primarily a customer-facing role. You will also work closely with SADA's Customer Experience team to execute on their recommendations to our customers.

Pathway to Success:
Our motivation is to provide customers with an exceptional experience in migrating, developing, modernizing, and operationalizing their systems in Google Cloud Platform. Your success starts by positively impacting the direction of a fast-growing practice with vision and passion. You will be measured yearly by the breadth, magnitude, and quality of your contributions, your ability to estimate accurately, customer feedback at the close of projects, how well you collaborate with your peers, and the consultative polish you bring to customer interactions. As you continue to execute successfully, we will build a customized development plan together that leads you through the engineering or management growth tracks.

Expectations:
Required Travel - 10% travel to customer sites, conferences, and other related events. Customer Facing - You will interact with customers on a regular basis, sometimes daily, other times weekly/bi-weekly. Onboarding/Training - Ongoing, with first-week orientation followed by a 90-day onboarding schedule. Details of the timeline can be shared.

Job Requirements
Required Credentials: Google Professional Data Engineer certification, or the ability to complete it within the first 45 days of employment. A secondary Google Cloud certification in any other specialization.
Required Qualifications: 3+ years of experience writing software in at least two languages such as SQL, Python, Java, Scala, or Go. Experience supporting customers, preferably in 24/7 environments. Experience with systems monitoring/alerting, capacity planning, and performance tuning. Experience with BI tools like Tableau, Looker, etc., will be an advantage. Experience working with Google Support.
Consultative mindset that delights the customer by building good rapport with them to fully understand their issues/requirements and provide accurate solutions.

Useful Qualifications:
Experience working with Google Cloud data products (Cloud SQL, Spanner, Cloud Storage, Pub/Sub, Dataflow, Dataproc, Bigtable, BigQuery, Dataprep, Composer, etc.). Experience with IoT architectures and building real-time data streaming pipelines. Experience operationalizing machine learning models on large datasets. Demonstrated leadership and self-direction -- willingness to teach others and learn new techniques. Understanding of the principle of least privilege and security best practices. Understanding of cryptocurrency and blockchain technology.

About SADA, an Insight company
Values: We built our core values on themes that internally compel us to deliver our best to our partners, our customers, and to each other. Ensuring a diverse and inclusive workplace where we learn from each other is core to SADA's values. We welcome people of different backgrounds, experiences, abilities, and perspectives. We are an equal opportunity employer. Hunger, Heart, Harmony.
Work with the best: SADA has been the largest Google Cloud partner in North America since 2016 and, for the eighth year in a row, has been named a Google Global Partner of the Year.
Business Performance: SADA has been named to the INC 5000 Fastest-Growing Private Companies list for 15 years in a row, garnering Honoree status. CRN has also named SADA to the Top 500 Global Solutions Providers list for the past 5 years. The overall culture continues to evolve with engineering at its core: 3200+ projects completed, 4000+ customers served, 10K+ workloads and 30M+ users migrated to the cloud.
SADA India is committed to the safety of its employees and recommends that new hires receive a COVID vaccination before beginning work.
Posted 1 month ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
When you join Verizon
You want more out of a career. A place to share your ideas freely — even if they’re daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love — driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together — lifting our communities and building trust in how we show up, everywhere & always. Want in? Join the #VTeamLife.

What You’ll Be Doing...
As an Engineer II - Data Engineering in the Artificial Intelligence and Data Organization (AI&D), you will drive various activities including Data Engineering, data operations automation, and data frameworks and platforms to improve the efficiency, customer experience, and profitability of the company. At Verizon, we are on a journey to industrialize our data science and AI capabilities. Very simply, this means that AI will fuel all decisions and business processes across the company. With our leadership in bringing the 5G network nationwide, the opportunity for AI will only grow exponentially in going from enabling billions of predictions to possibly trillions of predictions that are automated and real-time. Build high-quality Data Engineering applications. Design and implement data pipelines using Apache Airflow via Composer, Dataflow, and Dataproc for batch and streaming workloads. Develop and optimize SQL queries and data models in BigQuery to support downstream analytics and reporting (a brief BigQuery sketch appears at the end of this listing). Automate data ingestion, transformation, and export processes across various GCP components using Cloud Functions and Cloud Run. Monitor and troubleshoot data workflows using Cloud Monitoring and Cloud Logging to ensure system reliability and performance. Collaborate with data analysts, scientists, and business stakeholders to gather requirements and deliver data-driven solutions. Ensure adherence to data security, quality, and governance best practices throughout the pipeline lifecycle. Support the deployment of production-ready data solutions and assist in performance tuning and scalability efforts. Debug production failures and identify solutions. Work on ETL/ELT development.

What We’re Looking For...
We are looking for a highly motivated and skilled Engineer II – Data Engineer with strong experience in Google Cloud Platform (GCP) to join our growing data engineering team. The ideal candidate will work on building and maintaining scalable data pipelines and cloud-native workflows using a wide range of GCP services such as Airflow (Composer), BigQuery, Dataflow, Dataproc, Cloud Functions, Cloud Run, Cloud Monitoring, and Cloud Logging.

You'll Need To Have
Bachelor's degree or one or more years of work experience. Two or more years of relevant work experience. Two or more years of relevant work experience in GCP. Hands-on experience with Google Cloud Platform (GCP) and services such as: Airflow (Composer) for workflow orchestration; BigQuery for data warehousing and analytics; Dataflow for scalable data processing; Dataproc for Spark/Hadoop-based jobs; Cloud Functions and Cloud Run for event-driven and container-based computing; Cloud Monitoring and Logging for observability and alerting. Proficiency in Python for scripting and pipeline development. Good understanding of SQL, data modelling, and data transformation best practices.
Ability to troubleshoot complex data issues and optimize performance. Ability to effectively communicate through presentation, interpersonal, verbal and written skills. Strong communication skills, collaboration, problem-solving, analytical, and critical-thinking skills. Even better if you have one or more of the following: Master's degree in Computer Science, Information Systems and/or related technical discipline. Hands-on experience with AI/ML Models and Agentic AI building, tuning and deploying for Data Engineering applications. Big Data Analytics Certification in Google Cloud. Hands-on experience with Hadoop-based environments (HDFS, Hive, Spark, Dataproc). Knowledge of cost optimization techniques for cloud workloads. Knowledge of telecom architecture. If Verizon and this role sound like a fit for you, we encourage you to apply even if you don’t meet every “even better” qualification listed above. Where you’ll be working In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager. Scheduled Weekly Hours 40 Equal Employment Opportunity Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability or any other legally protected characteristics.
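To illustrate the BigQuery work referenced above, here is a minimal sketch of running an aggregation query from Python with the google-cloud-bigquery client. The project, dataset, table, and column names are hypothetical placeholders, not Verizon's actual data model.

```python
"""Sketch: running an aggregation in BigQuery from Python."""
from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")  # hypothetical project id

SQL = """
    SELECT device_type, COUNT(*) AS events
    FROM `my-gcp-project.analytics.network_events`
    WHERE event_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY)
    GROUP BY device_type
    ORDER BY events DESC
"""

# query() submits an asynchronous job; result() blocks until it completes.
for row in client.query(SQL).result():
    print(f"{row.device_type}: {row.events}")
```

In an orchestrated pipeline, a call like this would typically sit inside a Composer (Airflow) task, with the SQL parameterized per partition date rather than hard-coded.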
Posted 1 month ago
0 years
0 Lacs
Hyderabad, Telangana, India
Remote
When you join Verizon
You want more out of a career. A place to share your ideas freely — even if they’re daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love — driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together — lifting our communities and building trust in how we show up, everywhere & always. Want in? Join the #VTeamLife.

What You’ll Be Doing...
As an Engineer II - Data Engineering in the Artificial Intelligence and Data Organization (AI&D), you will drive various activities including Data Engineering, data operations automation, and data frameworks and platforms to improve the efficiency, customer experience, and profitability of the company. At Verizon, we are on a journey to industrialize our data science and AI capabilities. Very simply, this means that AI will fuel all decisions and business processes across the company. With our leadership in bringing the 5G network nationwide, the opportunity for AI will only grow exponentially in going from enabling billions of predictions to possibly trillions of predictions that are automated and real-time. Build high-quality Data Engineering applications. Design and implement data pipelines using Apache Airflow via Composer, Dataflow, and Dataproc for batch and streaming workloads. Develop and optimize SQL queries and data models in BigQuery to support downstream analytics and reporting. Automate data ingestion, transformation, and export processes across various GCP components using Cloud Functions and Cloud Run. Monitor and troubleshoot data workflows using Cloud Monitoring and Cloud Logging to ensure system reliability and performance. Collaborate with data analysts, scientists, and business stakeholders to gather requirements and deliver data-driven solutions. Ensure adherence to data security, quality, and governance best practices throughout the pipeline lifecycle. Support the deployment of production-ready data solutions and assist in performance tuning and scalability efforts. Debug production failures and identify solutions. Work on ETL/ELT development.

What We’re Looking For...
We are looking for a highly motivated and skilled Engineer II – Data Engineer with strong experience in Google Cloud Platform (GCP) to join our growing data engineering team. The ideal candidate will work on building and maintaining scalable data pipelines and cloud-native workflows using a wide range of GCP services such as Airflow (Composer), BigQuery, Dataflow, Dataproc, Cloud Functions, Cloud Run, Cloud Monitoring, and Cloud Logging.

You'll Need To Have
Bachelor's degree or one or more years of work experience. Two or more years of relevant work experience. Two or more years of relevant work experience in GCP. Hands-on experience with Google Cloud Platform (GCP) and services such as: Airflow (Composer) for workflow orchestration; BigQuery for data warehousing and analytics; Dataflow for scalable data processing; Dataproc for Spark/Hadoop-based jobs; Cloud Functions and Cloud Run for event-driven and container-based computing; Cloud Monitoring and Logging for observability and alerting. Proficiency in Python for scripting and pipeline development. Good understanding of SQL, data modelling, and data transformation best practices.
Ability to troubleshoot complex data issues and optimize performance. Ability to effectively communicate through presentation, interpersonal, verbal and written skills. Strong communication skills, collaboration, problem-solving, analytical, and critical-thinking skills. Even better if you have one or more of the following: Master's degree in Computer Science, Information Systems and/or related technical discipline. Hands-on experience with AI/ML Models and Agentic AI building, tuning and deploying for Data Engineering applications. Big Data Analytics Certification in Google Cloud. Hands-on experience with Hadoop-based environments (HDFS, Hive, Spark, Dataproc). Knowledge of cost optimization techniques for cloud workloads. Knowledge of telecom architecture. If Verizon and this role sound like a fit for you, we encourage you to apply even if you don’t meet every “even better” qualification listed above. Where you’ll be working In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager. Scheduled Weekly Hours 40 Equal Employment Opportunity Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability or any other legally protected characteristics.
Posted 1 month ago
5.0 years
7 - 8 Lacs
Hyderabad
On-site
JOB DESCRIPTION:
We are seeking a skilled Data Engineer with 5+ years of experience to design, build, and maintain scalable data pipelines and perform advanced data analysis to support business intelligence and data-driven decision-making. The ideal candidate will have a strong foundation in computer science principles, extensive experience with SQL and big data tools, and proficiency in cloud platforms and data visualization tools.

Key Responsibilities (Experience: 5+ years):
Design, develop, and maintain robust, scalable ETL pipelines using Apache Airflow, DBT, Composer (GCP), Control-M, Cron, Luigi, and similar tools. Build and optimize data architectures including data lakes and data warehouses. Integrate data from multiple sources, ensuring data quality and consistency. Collaborate with data scientists, analysts, and stakeholders to translate business requirements into technical solutions. Analyze complex datasets to identify trends, generate actionable insights, and support decision-making. Develop and maintain dashboards and reports using Tableau, Power BI, and Jupyter Notebooks for visualization and pipeline validation. Manage and optimize relational and NoSQL databases such as MySQL, PostgreSQL, Oracle, MongoDB, and DynamoDB. Work with big data tools and frameworks including Hadoop, Spark, Hive, Kafka, Informatica, Talend, SSIS, and Dataflow (a brief Spark transformation sketch follows this listing). Utilize cloud data services and warehouses like AWS Glue, GCP Dataflow, Azure Data Factory, Snowflake, Redshift, and BigQuery. Support CI/CD pipelines and DevOps workflows using Git, Docker, Terraform, and related tools. Ensure data governance, security, and compliance standards are met. Participate in Agile and DevOps processes to enhance data engineering workflows.

Required Qualifications:
5+ years of professional experience in data engineering and data analysis roles. Strong proficiency in SQL and experience with database management systems such as MySQL, PostgreSQL, Oracle, and MongoDB. Hands-on experience with big data tools like Hadoop and Apache Spark. Proficient in Python programming. Experience with data visualization tools such as Tableau, Power BI, and Jupyter Notebooks. Proven ability to design, build, and maintain scalable ETL pipelines using tools like Apache Airflow, DBT, Composer (GCP), Control-M, Cron, and Luigi. Familiarity with data engineering tools including Hive, Kafka, Informatica, Talend, SSIS, and Dataflow. Experience working with cloud data warehouses and services (Snowflake, Redshift, BigQuery, AWS Glue, GCP Dataflow, Azure Data Factory). Understanding of data modeling concepts and data lake/data warehouse architectures. Experience supporting CI/CD practices with Git, Docker, Terraform, and DevOps workflows. Knowledge of both relational and NoSQL databases, including PostgreSQL, BigQuery, MongoDB, and DynamoDB. Exposure to Agile and DevOps methodologies. Experience with Amazon Web Services (S3, Glue, Redshift, Lambda, Athena).

Preferred Skills (nice to have):
Strong problem-solving and communication skills. Ability to work independently and collaboratively in a team environment. Experience with service development, REST APIs, and automation testing is a plus. Familiarity with version control systems and workflow automation.
We offer:
Opportunity to work on bleeding-edge projects. Work with a highly motivated and dedicated team. Competitive salary. Flexible schedule. Benefits package - medical insurance, sports. Corporate social events. Professional development opportunities. Well-equipped office.

About us:
Grid Dynamics (NASDAQ: GDYN) is a leading provider of technology consulting, platform and product engineering, AI, and advanced analytics services. Fusing technical vision with business acumen, we solve the most pressing technical challenges and enable positive business outcomes for enterprise companies undergoing business transformation. A key differentiator for Grid Dynamics is our 8 years of experience and leadership in enterprise AI, supported by profound expertise and ongoing investment in data, analytics, cloud & DevOps, application modernization, and customer experience. Founded in 2006, Grid Dynamics is headquartered in Silicon Valley with offices across the Americas, Europe, and India.
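For the Spark portion of the responsibilities above, here is a minimal PySpark batch-transformation sketch. The file paths, column names, and partitioning scheme are hypothetical; in practice a job like this would run on Dataproc or EMR and read from cloud storage rather than local disk.

```python
"""Sketch: a small batch transformation with PySpark."""
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily_rollup").getOrCreate()

# Hypothetical raw input produced by an upstream ingestion job.
orders = spark.read.parquet("/data/raw/orders")

daily_revenue = (
    orders
    .filter(F.col("status") == "COMPLETED")
    .groupBy(F.to_date("created_at").alias("order_date"), "region")
    .agg(
        F.count("*").alias("order_count"),
        F.sum("total_amount").alias("revenue"),
    )
)

# Partitioning the output by date keeps downstream date-bounded reads cheap.
daily_revenue.write.mode("overwrite").partitionBy("order_date").parquet("/data/curated/daily_revenue")

spark.stop()
```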
Posted 1 month ago
8.0 - 12.0 years
6 - 8 Lacs
Hyderabad
On-site
About the Role:
Grade Level (for internal use): 12

The Team: As a member of the EDO, Collection Platforms & AI – Cognitive Engineering team you will spearhead the design and delivery of robust, scalable ML infrastructure and pipelines that power natural language understanding, data extraction, information retrieval, and data sourcing solutions for S&P Global. You will define AI/ML engineering best practices, mentor fellow engineers and data scientists, and drive production-ready AI products from ideation through deployment. You’ll thrive in a (truly) global team that values thoughtful risk-taking and self-initiative.

What’s in it for you: Be part of a global company and build solutions at enterprise scale. Lead and grow a technically strong ML engineering function. Collaborate on and solve high-complexity, high-impact problems. Shape the engineering roadmap for emerging AI/ML capabilities (including GenAI integrations).

Key Responsibilities: Architect, develop, and maintain production-ready data acquisition, transformation, and ML pipelines (batch & streaming). Serve as a hands-on lead: writing code, conducting reviews, and troubleshooting to extend and operate our data platforms. Apply best practices in data modeling, ETL design, and pipeline orchestration using cloud-native solutions. Establish CI/CD and MLOps workflows for model training, validation, deployment, monitoring, and rollback (a minimal lifecycle sketch appears at the end of this listing). Integrate GenAI components (LLM inference endpoints, embedding stores, prompt services) into broader ML systems. Mentor and guide engineers and data scientists; foster a culture of craftsmanship and continuous improvement. Collaborate with cross-functional stakeholders (Data Science, Product, IT) to align on requirements, timelines, and SLAs.

What We’re Looking For: 8-12 years' professional software engineering experience with a strong MLOps focus. Expert in Python and Apache for large-scale data processing. Deep experience deploying and operating ML pipelines on AWS or GCP. Hands-on proficiency with container/orchestration tooling. Solid understanding of the full ML model lifecycle and CI/CD principles. Skilled in streaming and batch ETL design (e.g., Airflow, Dataflow). Strong OOP design patterns, Test-Driven Development, and enterprise system architecture. Advanced SQL skills (big-data variants a plus) and comfort with Linux/bash toolsets. Familiarity with version control (Git, GitHub, or Azure DevOps) and code review processes. Excellent problem-solving, debugging, and performance-tuning abilities. Ability to communicate technical change clearly to non-technical audiences.

Nice to have: Redis, Celery, SQS and Lambda based event-driven pipelines. Prior work integrating LLM services (OpenAI, Anthropic, etc.) at scale. Experience with Apache Avro and Apache. Familiarity with Java and/or .NET Core (C#).

What’s In It For You?
Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People: We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective.
Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all, from finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values: Integrity, Discovery, Partnership
At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include: Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here.
----------------------------------------------------------- Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf ----------------------------------------------------------- IFTECH103.2 - Middle Management Tier II (EEO Job Group) Job ID: 317386 Posted On: 2025-06-30 Location: Gurgaon, Haryana, India
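To make the model-lifecycle responsibility in this listing more concrete, here is a minimal train-validate-promote sketch of the kind a CI/CD or MLOps workflow might run. The dataset, promotion threshold, and artifact path are hypothetical placeholders; a production setup would add experiment tracking, a model registry, and automated rollback on regression.

```python
"""Sketch: a minimal train -> validate -> promote step of an ML pipeline."""
import joblib
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real feature table.
X, y = make_classification(n_samples=5_000, n_features=20, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)

model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)

# Validation gate: only persist ("promote") the model if it clears a quality bar.
accuracy = accuracy_score(y_val, model.predict(X_val))
if accuracy >= 0.85:  # hypothetical promotion threshold
    joblib.dump(model, "model_candidate.joblib")
    print(f"Model promoted with validation accuracy {accuracy:.3f}")
else:
    raise SystemExit(f"Model rejected: accuracy {accuracy:.3f} below threshold")
```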
Posted 1 month ago
0 years
0 Lacs
Hyderabad
Remote
Hyderabad, India | Bangalore, India | Chennai, India
Job ID: R-1075056
Apply prior to the end date: August 31st, 2025

When you join Verizon
You want more out of a career. A place to share your ideas freely — even if they’re daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love — driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together — lifting our communities and building trust in how we show up, everywhere & always. Want in? Join the #VTeamLife.

What you’ll be doing...
As an Engineer II - Data Engineering in the Artificial Intelligence and Data Organization (AI&D), you will drive various activities including Data Engineering, data operations automation, and data frameworks and platforms to improve the efficiency, customer experience, and profitability of the company. At Verizon, we are on a journey to industrialize our data science and AI capabilities. Very simply, this means that AI will fuel all decisions and business processes across the company. With our leadership in bringing the 5G network nationwide, the opportunity for AI will only grow exponentially in going from enabling billions of predictions to possibly trillions of predictions that are automated and real-time. Build high-quality Data Engineering applications. Design and implement data pipelines using Apache Airflow via Composer, Dataflow, and Dataproc for batch and streaming workloads. Develop and optimize SQL queries and data models in BigQuery to support downstream analytics and reporting. Automate data ingestion, transformation, and export processes across various GCP components using Cloud Functions and Cloud Run. Monitor and troubleshoot data workflows using Cloud Monitoring and Cloud Logging to ensure system reliability and performance. Collaborate with data analysts, scientists, and business stakeholders to gather requirements and deliver data-driven solutions. Ensure adherence to data security, quality, and governance best practices throughout the pipeline lifecycle. Support the deployment of production-ready data solutions and assist in performance tuning and scalability efforts. Debug production failures and identify solutions. Work on ETL/ELT development.

What we’re looking for...
We are looking for a highly motivated and skilled Engineer II – Data Engineer with strong experience in Google Cloud Platform (GCP) to join our growing data engineering team. The ideal candidate will work on building and maintaining scalable data pipelines and cloud-native workflows using a wide range of GCP services such as Airflow (Composer), BigQuery, Dataflow, Dataproc, Cloud Functions, Cloud Run, Cloud Monitoring, and Cloud Logging.

You'll need to have:
Bachelor's degree or one or more years of work experience. Two or more years of relevant work experience. Two or more years of relevant work experience in GCP.
Hands-on experience with Google Cloud Platform (GCP) and services such as: Airflow (Composer) for workflow orchestration BigQuery for data warehousing and analytics Dataflow for scalable data processing Dataproc for Spark/Hadoop-based jobs Cloud Functions and Cloud Run for event-driven and container-based computing Cloud Monitoring and Logging for observability and alerting Proficiency in Python for scripting and pipeline development. Good understanding of SQL, data modelling, and data transformation best practices. Ability to troubleshoot complex data issues and optimize performance. Ability to effectively communicate through presentation, interpersonal, verbal and written skills. Strong communication skills, collaboration, problem-solving, analytical, and critical-thinking skills. Even better if you have one or more of the following: Master's degree in Computer Science, Information Systems and/or related technical discipline. Hands-on experience with AI/ML Models and Agentic AI building, tuning and deploying for Data Engineering applications. Big Data Analytics Certification in Google Cloud. Hands-on experience with Hadoop-based environments (HDFS, Hive, Spark, Dataproc). Knowledge of cost optimization techniques for cloud workloads. Knowledge of telecom architecture. If Verizon and this role sound like a fit for you, we encourage you to apply even if you don’t meet every “even better” qualification listed above. Where you’ll be working In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager. Scheduled Weekly Hours 40 Equal Employment Opportunity Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability or any other legally protected characteristics.
Posted 1 month ago
4.0 - 7.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Designation: Senior Software Engineer
Experience in Years: 4 to 7 years
Job Location: Chennai (Hybrid)

Skills Required:
Front-End: Proficiency in Angular; any other JavaScript experience would be a plus.
Back-End: Strong hands-on experience in Java/Python, microservices architecture, and API design & integration (a minimal API sketch follows this listing). Experience building scalable, high-performance systems. Solid understanding of database design; exposure to NoSQL databases is a plus. Familiarity with Unix/Linux environments, scripting, and associated tools. Strong problem-solving skills with experience designing system architecture for complex solutions. Knowledge of Object-Oriented Programming (OOP) and design patterns. Experience with Test-Driven Development (TDD) and Agile methodologies. Strong communication skills and experience with tools like Git. Self-starter with a strong drive to learn and mentor. Exposure to Google Cloud Platform (GCP) and services like BigQuery, Dataproc, and Dataflow is a big plus.

Role & Responsibilities:
Develop, enhance, modify, and maintain applications in the Global Markets environment. Design, code, test, debug, and document programs, while also supporting activities for the corporate systems architecture. Partner with business teams to define requirements for system applications. Create clear and comprehensive technical specifications and documentation. Maintain in-depth knowledge of current development tools, languages, and frameworks. Supervise and mentor a small team of associates, providing coaching and performance management input. Stay up to date on new technology trends and research best practices to achieve optimal results. Perform additional technical duties as required.

What’s in it for you?
The experience of working in category-defining, high-growth startups in the transformational AI, Decision Science and Big Data domain. The opportunity of getting on board the phenomenal growth journey and helping customers take the next big leap in digital transformation. The opportunity to work with a diverse, lively and proactive group of techies who are constantly raising the bar on the art of translating mounds of data into tangible business value for clients.
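As a companion to the API design and microservices skills above, here is a minimal Python (FastAPI) sketch of a REST-style service endpoint. The resource model, in-memory store, and service name are hypothetical; a real service in this role might equally be written in Java, and would use a database, authentication, and fuller error handling.

```python
"""Sketch: a tiny REST-style microservice in Python with FastAPI."""
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="orders-service")  # hypothetical service name


class Order(BaseModel):
    order_id: int
    symbol: str
    quantity: int


ORDERS: dict[int, Order] = {}  # in-memory stand-in for a database


@app.post("/orders", status_code=201)
def create_order(order: Order) -> Order:
    ORDERS[order.order_id] = order
    return order


@app.get("/orders/{order_id}")
def get_order(order_id: int) -> Order:
    if order_id not in ORDERS:
        raise HTTPException(status_code=404, detail="order not found")
    return ORDERS[order_id]

# Run locally with: uvicorn main:app --reload  (assuming this file is saved as main.py)
```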
Posted 1 month ago
0 years
0 Lacs
Gurgaon
On-site
Job Description:
We are looking for a highly skilled Engineer with solid experience building Big Data and GCP cloud-based real-time data pipelines and REST APIs with Java frameworks. The Engineer will play a crucial role in designing, implementing, and optimizing data solutions to support our organization's data-driven initiatives. This role requires expertise in data engineering, strong problem-solving abilities, and a collaborative mindset to work effectively with various stakeholders. This role will be focused on the delivery of innovative solutions to satisfy the needs of our business. As an agile team we work closely with our business partners to understand what they require, and we strive to continuously improve as a team.

Technical Skills
1. Core Data Engineering Skills: Proficiency in using GCP's big data tools, such as BigQuery for data warehousing and SQL analytics; Dataproc for running Spark and Hadoop clusters; GCP Dataflow for stream and batch data processing (high-level idea); GCP Pub/Sub for real-time messaging and event ingestion (high-level idea; a minimal publisher sketch follows this listing). Expertise in building automated, scalable, and reliable pipelines using custom Python/Scala solutions or Cloud Data Functions.
2. Programming and Scripting: Strong coding skills in SQL and Java. Familiarity with APIs and SDKs for GCP services to build custom data solutions.
3. Cloud Infrastructure: Understanding of GCP services such as Cloud Storage, Compute Engine, and Cloud Functions. Familiarity with Kubernetes (GKE) and containerization for deploying data pipelines (optional but good to have).
4. DevOps and CI/CD: Experience setting up CI/CD pipelines using Cloud Build, GitHub Actions, or other tools. Monitoring and logging tools like Cloud Monitoring and Cloud Logging for production workflows.
5. Backend Development (Spring Boot & Java): Design and develop RESTful APIs and microservices using Spring Boot. Implement business logic, security, authentication (JWT/OAuth), and database operations. Work with relational databases (MySQL, PostgreSQL, MongoDB, Cloud SQL). Optimize backend performance, scalability, and maintainability. Implement unit testing and integration testing.

Key skills: Big Data, ETL, Data Warehousing, GCP, Java, REST API, CI/CD, Kubernetes.

About Virtusa
Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
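To illustrate the real-time ingestion side mentioned above, here is a minimal sketch of publishing an event to a GCP Pub/Sub topic from Python. The project id, topic name, and payload shape are hypothetical placeholders; a real pipeline would typically have a Dataflow or subscriber service consuming these messages downstream.

```python
"""Sketch: publishing events to a GCP Pub/Sub topic for a real-time pipeline."""
import json

from google.cloud import pubsub_v1

PROJECT_ID = "my-gcp-project"  # hypothetical project id
TOPIC_ID = "vehicle-events"    # hypothetical topic name

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(PROJECT_ID, TOPIC_ID)

event = {"event_type": "price_update", "item_id": 42, "price": 10500}

# publish() is asynchronous and returns a future; result() blocks until the
# service acknowledges the message and returns its id.
future = publisher.publish(topic_path, data=json.dumps(event).encode("utf-8"), source="demo")
print(f"Published message {future.result()}")
```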
Posted 1 month ago
8.0 - 12.0 years
6 - 9 Lacs
Gurgaon
On-site
About the Role: Grade Level (for internal use): 12 The Team: As a member of the EDO, Collection Platforms & AI – Cognitive Engineering team you will spearhead the design and delivery of robust, scalable ML infrastructure and pipelines that power natural language understanding, data extraction, information retrieval, and data sourcing solutions for S&P Global. You will define AI/ML engineering best practices, mentor fellow engineers and data scientists, and drive production-ready AI products from ideation through deployment. You’ll thrive in a (truly) global team that values thoughtful risk-taking and self-initiative. What’s in it for you: Be part of a global company and build solutions at enterprise scale Lead and grow a technically strong ML engineering function Collaborate on and solve high-complexity, high-impact problems Shape the engineering roadmap for emerging AI/ML capabilities (including GenAI integrations) Key Responsibilities: Architect, develop, and maintain production-ready data acquisition, transformation, and ML pipelines (batch & streaming) Serve as a hands-on lead: writing code, conducting reviews, and troubleshooting to extend and operate our data platforms Apply best practices in data modeling, ETL design, and pipeline orchestration using cloud-native solutions Establish CI/CD and MLOps workflows for model training, validation, deployment, monitoring, and rollback Integrate GenAI components (LLM inference endpoints, embedding stores, prompt services) into broader ML systems Mentor and guide engineers and data scientists; foster a culture of craftsmanship and continuous improvement Collaborate with cross-functional stakeholders (Data Science, Product, IT) to align on requirements, timelines, and SLAs What We’re Looking For: 8-12 years' professional software engineering experience with a strong MLOps focus Expert in Python and Apache for large-scale data processing Deep experience deploying and operating ML pipelines on AWS or GCP Hands-on proficiency with container/orchestration tooling Solid understanding of the full ML model lifecycle and CI/CD principles Skilled in streaming and batch ETL design (e.g., Airflow, Dataflow) Strong OOP design patterns, Test-Driven Development, and enterprise system architecture Advanced SQL skills (big-data variants a plus) and comfort with Linux/bash toolsets Familiarity with version control (Git, GitHub, or Azure DevOps) and code review processes Excellent problem-solving, debugging, and performance-tuning abilities Ability to communicate technical change clearly to non-technical audiences Nice to have: Redis, Celery, SQS, and Lambda-based event-driven pipelines Prior work integrating LLM services (OpenAI, Anthropic, etc.) at scale Experience with Apache Avro and Apache Familiarity with Java and/or .NET Core (C#) What’s In It For You? Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People: We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. 
Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values: Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits: We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include: Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com . S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here . 
----------------------------------------------------------- Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf ----------------------------------------------------------- IFTECH103.2 - Middle Management Tier II (EEO Job Group) Job ID: 317386 Posted On: 2025-06-30 Location: Gurgaon, Haryana, India
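As a point of reference for the Airflow and batch-ETL skills listed in the posting above, here is a minimal Apache Airflow DAG sketch chaining extract, transform, and model-training steps. The DAG id and task bodies are hypothetical placeholders; real tasks would call warehouse and ML tooling rather than print statements.

```python
# Minimal Airflow 2.x DAG sketch: daily extract -> transform -> train chain.
# Task bodies are placeholders, not an actual production pipeline.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    print("pull raw data from the source system")


def transform(**context):
    print("clean and reshape the extracted data")


def train_model(**context):
    print("retrain and validate the ML model on the transformed data")


with DAG(
    dag_id="example_ml_etl",
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    train_task = PythonOperator(task_id="train_model", python_callable=train_model)

    extract_task >> transform_task >> train_task
```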
Posted 1 month ago
1.0 years
7 - 9 Lacs
Delhi
On-site
Job Title: Nursing/Health Care Assistant Location: Oman Employment Type: Full-Time (rotational shifts, weekend availability) Salary: 250 to 300 OMR per month Reports To: RNs / LPNs / Nurse Manager Job Summary We are seeking a compassionate and dedicated Nursing/Health Care Assistant to support our nursing and rehabilitation team in delivering exceptional patient care. Under the supervision of RNs/LPNs, you will assist with daily living activities, monitor vital signs, maintain hygiene and safety, support therapy sessions, manage feeding and incontinence, perform light housekeeping, and assist with admissions, transfers, and transportation. Key Responsibilities 1. Personal Care & Activities of Daily Living Assist patients with bathing, grooming, dressing, toileting, and incontinence care. Support mobility: transfers, ambulation, positioning, turning to prevent bedsores, and range-of-motion exercises. Provide tube feeding and feeding assistance when necessary. 2. Observation & Monitoring Measure and record vital signs (BP, pulse, temperature, respiration) and intake/output per shift. Observe and document changes in behaviour, mood, physical condition, or signs of distress/aggression, and report promptly. Assist in restraining patients as per rehabilitation protocols. 3. Therapeutic Support Aid physiotherapists and participate in group or individual therapy sessions. Escort patients in emergency and non-emergency situations within the facility or to outpatient (OPD) appointments and events. 4. Medical & Equipment Care Support light medical tasks under supervision (e.g., non‑sterile dressings, routine equipment/supply care). Perform inventory checks and ensure medical supplies/equipment are organized and functional. 5. Environment & Safety Ensure patient rooms are clean and hygienic: change linens, sanitize equipment, tidy rooms. Maintain infection control, follow health & safety protocols, and supervise patients to prevent falls or harm. 6. Admissions, Transfers & Documentation Assist with patient admissions, transfers, and discharges. Accurately record care activities, observations, vitals, feeding, and output in patient charts. 7. Emotional & Companionship Support Provide compassionate companionship, basic patient education, and emotional support. Qualifications & Skills ANM diploma (2‑year) or CNA/Healthcare Assistant certification. 1–3 years minimum healthcare or GNM/BSc or relevant qualification; 3+ years preferred. CPR/BLS certification advantageous. Valid Dataflow clearance (for international candidates). Strong interpersonal, communication, empathy, and confidentiality skills. Physically able to lift up to ~50 lbs, stand for long periods, and perform patient transfers. Working Hours & Benefits Schedule : Rotational shifts; weekend availability. Benefits : Free Joining Ticket (Will be reimbursed after the 3 months’ Probation period) 30 Days paid Annual leave after 1 year of service completion Yearly Up and Down Air Ticket Medical Insurance Life Insurance Accommodation (Chargeable up to OMR 20/-) Note: Interested candidates please call us at 97699 11050 or 99302 65888 , or email your CV to recruitment@thegrowthhive.org . Job Type: Full-time Pay: ₹60,000.00 - ₹80,000.00 per month Benefits: Cell phone reimbursement Food provided Health insurance Life insurance Provident Fund Schedule: Monday to Friday Rotational shift Weekend availability Work Location: In person
Posted 1 month ago
8.0 - 12.0 years
6 - 9 Lacs
Ahmedabad
On-site
About the Role: Grade Level (for internal use): 12 The Team: As a member of the EDO, Collection Platforms & AI – Cognitive Engineering team you will spearhead the design and delivery of robust, scalable ML infrastructure and pipelines that power natural language understanding, data extraction, information retrieval, and data sourcing solutions for S&P Global. You will define AI/ML engineering best practices, mentor fellow engineers and data scientists, and drive production-ready AI products from ideation through deployment. You’ll thrive in a (truly) global team that values thoughtful risk-taking and self-initiative. What’s in it for you: Be part of a global company and build solutions at enterprise scale Lead and grow a technically strong ML engineering function Collaborate on and solve high-complexity, high-impact problems Shape the engineering roadmap for emerging AI/ML capabilities (including GenAI integrations) Key Responsibilities: Architect, develop, and maintain production-ready data acquisition, transformation, and ML pipelines (batch & streaming) Serve as a hands-on lead: writing code, conducting reviews, and troubleshooting to extend and operate our data platforms Apply best practices in data modeling, ETL design, and pipeline orchestration using cloud-native solutions Establish CI/CD and MLOps workflows for model training, validation, deployment, monitoring, and rollback Integrate GenAI components (LLM inference endpoints, embedding stores, prompt services) into broader ML systems Mentor and guide engineers and data scientists; foster a culture of craftsmanship and continuous improvement Collaborate with cross-functional stakeholders (Data Science, Product, IT) to align on requirements, timelines, and SLAs What We’re Looking For: 8-12 years' professional software engineering experience with a strong MLOps focus Expert in Python and Apache for large-scale data processing Deep experience deploying and operating ML pipelines on AWS or GCP Hands-on proficiency with container/orchestration tooling Solid understanding of the full ML model lifecycle and CI/CD principles Skilled in streaming and batch ETL design (e.g., Airflow, Dataflow) Strong OOP design patterns, Test-Driven Development, and enterprise system architecture Advanced SQL skills (big-data variants a plus) and comfort with Linux/bash toolsets Familiarity with version control (Git, GitHub, or Azure DevOps) and code review processes Excellent problem-solving, debugging, and performance-tuning abilities Ability to communicate technical change clearly to non-technical audiences Nice to have: Redis, Celery, SQS, and Lambda-based event-driven pipelines Prior work integrating LLM services (OpenAI, Anthropic, etc.) at scale Experience with Apache Avro and Apache Familiarity with Java and/or .NET Core (C#) What’s In It For You? Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People: We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. 
Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values: Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits: We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include: Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com . S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here . 
----------------------------------------------------------- Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf ----------------------------------------------------------- IFTECH103.2 - Middle Management Tier II (EEO Job Group) Job ID: 317386 Posted On: 2025-06-30 Location: Gurgaon, Haryana, India
Posted 1 month ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description Some careers shine brighter than others. If you’re looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further. HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions. We are currently seeking an experienced professional to join our team in the role of Software Engineer. In this role, you will: Enhance & drive the overall product strategy, providing the vision and roadmap for the Data Analytics platform's cloud journey and helping drive future requirements with reduced operational costs. Implement the IT strategy to support core business objectives and gain business value. Become the ‘voice’ of the business within technology to ensure strategies are cohesive across all business streams. Identify interdependencies between various integrated teams and release plans. Accountable for identifying and resolving any alignment issues within the component teams delivered through the Global IT organization. Be part of a global team consisting of 20+ resources across development and support. Create and execute plans to support training and adequate levels of resourcing to meet global demand. Be accountable for ensuring the products & services are delivered adhering to the approved architecture and solutions to meet customer needs. Drive and support technical design and change for new and existing data sources, and manage support for delivering state-of-the-art intelligence infrastructure. Evolve the DevOps model, ensuring continued improvement of the technology lifecycle and alignment with stakeholder plans. Adhere to compliance with external regulatory requirements, internal control standards and group compliance policy. Maintain HSBC internal control standards, including timely implementation of internal and external audit points. Take accountability to work closely and build a trusted relationship with the business to ensure delivery of the benefits outlined by the respective strategy. Requirements To be successful in this role, you should meet the following requirements: Experience in a retail banking environment, with a good understanding of the customer lifecycle across core products. 6+ years of industry experience, solid exposure to managing/supporting product-based teams providing global services. Developing and maintaining ReactJS-based web applications: This includes creating new features, enhancing existing ones, and ensuring the overall functionality and user experience of the application. Writing clean, efficient, and reusable React components: This involves using best practices for component design, ensuring code readability, and creating components that can be used across multiple applications. Implementing state management: This involves using tools like Redux or Context API to manage the application's data and state efficiently. Ensuring cross-browser compatibility and mobile responsiveness: This means ensuring that the application looks and functions correctly across different browsers and devices. 
Optimizing application performance: This includes identifying and fixing performance bottlenecks, improving loading times, and ensuring a smooth user experience. Working closely with backend developers to integrate APIs: This involves collaborating with backend developers to define API endpoints, consume them in the frontend, and ensure seamless data flow. Following best practices in UI/UX design and front-end architecture: This involves understanding UI/UX principles, designing user-friendly interfaces, and structuring the codebase in a maintainable and scalable way. Staying updated with the latest ReactJS trends and features: This means continuously learning about new features and best practices in the ReactJS ecosystem. Performing unit testing and debugging for high-quality applications: This involves writing unit tests to ensure the quality of the code, debugging issues, and fixing bugs. Maintaining code quality, organization, and documentation: This involves writing clear and concise code, organizing the codebase, and documenting the code for future reference. Skills: In-depth knowledge of JavaScript and ReactJS: This includes understanding core JavaScript concepts, React components, JSX, and state management. Familiarity with other front-end technologies: This can include HTML, CSS, Bootstrap, and potentially other frameworks like Angular or VueJS. Experience with state management libraries: This can include Redux, Context API, or MobX. Understanding of front-end performance optimization techniques: This can include lazy loading, code splitting, and image optimization. Experience with version control systems (e.g., Git): This is essential for collaborating with other developers and managing the codebase. Good communication and collaboration skills: This is crucial for working with other developers, designers, and stakeholders. Problem-solving skills and ability to debug: This is essential for identifying and fixing issues in the codebase. Understanding of UI/UX design principles: This helps in creating user-friendly and intuitive interfaces. Ability to write clean, well-documented code: This makes the code easier to maintain and understand. Experience with front-end build tools (e.g., Webpack, Babel): These tools are used to automate tasks like bundling, transpiling, and minifying code. Strong proven experience in data migration projects on cloud technologies like GCP/AWS, with hands-on experience with Docker/Kubernetes. Strong proven skills in Dataflow, Airflow, BigQuery, and the Big Data ecosystem, including Hadoop and cloud technologies. Strong knowledge of Data Warehousing, ETL, Analytics, and Business Intelligence Reporting. Experience of working in a DevOps and Agile environment, strong knowledge, and experience of support tools like Jenkins, Git, Nexus, Splunk, AppDynamics, etc. You’ll achieve more when you join HSBC. www.hsbc.com/careers HSBC is committed to building a culture where all employees are valued, respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website. Issued by – HSBC Software Development India
Posted 1 month ago
12.0 years
0 Lacs
Maharashtra, India
On-site
Position: GCP Architect (Data Engineering) Key Result Areas: Architect modern D&A solutions using best-of-breed cloud services, specifically from GCP, aligned to client needs, and drive implementation for successful delivery. Demonstrate expertise for client success through delivery support and thought leadership in cloud data architecture, with a focus on GCP data & analytics services. Contribute to business growth through presales support for GCP-based solutions. Research & experiment to address unmet needs through innovation. Build & reuse knowledge, expertise & foundational components for cloud data architecture and data engineering, specifically on GCP. Grow & nurture technical talent within the Infocepts GCP community of practice. Must-Have Skills Deep hands-on experience with BigQuery, Dataflow, Pub/Sub, Cloud Storage, and Composer. Proven ability to design enterprise-grade data lakes, warehouses, and real-time data systems. Strong command of Python and SQL for data engineering and automation tasks. Expertise in building and managing complex ETL/ELT pipelines using tools like Apache Beam or Airflow. Experience in leading teams, conducting code reviews, and engaging with senior stakeholders. Good-to-Have Skills Familiarity with Terraform or Deployment Manager for GCP resource provisioning. Experience with Kafka, Apache Beam, or similar technologies. Knowledge of data lineage, cataloging, encryption, and compliance frameworks (e.g., GDPR, HIPAA). Exposure to integrating data pipelines with ML models and Vertex AI. Understanding of Looker, Tableau, or Power BI for data consumption. Qualifications: Overall work experience of 12+ years, with a minimum of 3 to 6 years' experience on GCP-related projects. BS degree in IT, MIS, or a business-related functional discipline. Experience with or knowledge of Agile software development methodologies.
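To make the BigQuery and Cloud Storage work in this listing concrete, a minimal ELT sketch using the google-cloud-bigquery client to load Parquet files from GCS and run a transformation query. The project, bucket, dataset, and table names are hypothetical placeholders.

```python
# Minimal ELT sketch with google-cloud-bigquery: GCS Parquet -> staging table -> query.
# All resource names below are hypothetical placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

# Load step: append Parquet files from Cloud Storage into a staging table.
load_job = client.load_table_from_uri(
    "gs://my-bucket/raw/orders/*.parquet",
    "my-project.staging.orders",
    job_config=bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.PARQUET,
        write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
    ),
)
load_job.result()  # wait for the load to complete

# Transform step: aggregate staging data into a reporting table.
query = """
CREATE OR REPLACE TABLE `my-project.marts.daily_revenue` AS
SELECT DATE(order_ts) AS order_date, SUM(amount) AS revenue
FROM `my-project.staging.orders`
GROUP BY order_date
"""
client.query(query).result()
print("ELT run complete")
```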
Posted 1 month ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
About the Company: Strong experience in Big Data: data modelling, design, architecting & solutioning. About the Role: Understands programming languages like SQL, Python, R, and Scala. Responsibilities: Good Python skills. Experience with data visualisation tools such as Google Data Studio or Power BI. Knowledge in A/B Testing, Statistics, Google Cloud Platform, Google BigQuery, Agile Development, DevOps, Data Engineering, ETL Data Processing. Strong experience migrating production Hadoop clusters to Google Cloud. Qualifications / Good to Have / Required Skills: Expert in BigQuery, Dataproc, Data Fusion, Dataflow, Bigtable, Firestore, Cloud SQL, Cloud Spanner, Google Cloud Storage, Cloud Composer, Cloud Interconnect, etc. Preferred Skills: None specified. Pay range and compensation package: Not specified. Equal Opportunity Statement: Not specified.
Posted 1 month ago
3.0 years
5 - 7 Lacs
Delhi, India
On-site
Urgent Hiring for a Reputed Hospital in Oman Location : Oman Industry : Healthcare / Hospital Employment Type : Full-Time | Overseas Opportunity A reputed and well-established hospital in Oman is inviting applications from qualified and experienced healthcare professionals for immediate hiring. This is an excellent opportunity for nursing professionals seeking overseas placement with career growth, competitive salary, and attractive benefits. Nursing Assistants (M) Qualification: GNM or B.Sc. Nursing Experience: Minimum 3 years Mandatory: Positive Dataflow Report Salary: Up to OMR 300 Job Role : Assist registered nurses in delivering basic clinical care, maintaining hygiene, assisting in mobility, and monitoring patient vitals under supervision. Healthcare Assistant (M) Qualification: GNM or B.Sc. Nursing Experience: Minimum 3 years Mandatory: Positive Dataflow Report Salary: Up to OMR 275 Job Role : Provide support in daily patient care, record observations, and ensure hygiene and comfort of patients in the assigned wards or units. Nursing Assistants (F) Qualification: ANM (2-year course) from a State Council recognized institute Experience: Minimum 3 years Mandatory: Positive Dataflow Report Salary: Up to OMR 250 Job Role : Assist nursing staff with patient care activities, hygiene support, and basic monitoring under supervision in clinical and non-clinical settings. Healthcare Assistant (F) Qualification: ANM (1-year course) from a State Council recognized institute Experience: Minimum 3 years Mandatory: Positive Dataflow Report Salary: Up to OMR 230 Job Role : Help patients with mobility, maintain cleanliness, support nurses in non-clinical care, and ensure a safe and comforting environment for patients. Employee Benefits Free Joining Ticket (reimbursed after successful completion of 3-month probation) 30 Days Paid Annual Leave (after completion of 1 year of service) Yearly Round-Trip Air Ticket Medical Insurance Life Insurance Accommodation Provided (chargeable up to OMR 20/month) Additional Requirements Age preferably below 38 years Must hold a Positive Dataflow Report Excellent interpersonal and patient-handling skills Willingness to relocate and work flexible shifts in Oman All documents must be ready for licensing and visa processing How To Apply Interested candidates are requested to send the following documents: Updated CV Passport Copy Positive Dataflow Report Shortlisted candidates will be contacted for further interview and documentation process. Skills: vital signs monitoring,clinical support,mobility assistance,patient-handling skills,healthcare,hygiene maintenance,interpersonal skills,patient care,assistants,nurses
Posted 1 month ago
0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Job Description We are looking for a highly skilled Engineer with solid experience of building Big Data, GCP Cloud based real-time data pipelines and REST APIs with Java frameworks. The Engineer will play a crucial role in designing, implementing, and optimizing data solutions to support our organization's data-driven initiatives. This role requires expertise in data engineering, strong problem-solving abilities, and a collaborative mindset to work effectively with various stakeholders. This role will be focused on the delivery of innovative solutions to satisfy the needs of our business. As an agile team we work closely with our business partners to understand what they require, and we strive to continuously improve as a team. Technical Skills 1. Core Data Engineering Skills Proficiency in using GCP's big data tools like BigQuery: For data warehousing and SQL analytics. Dataproc: For running Spark and Hadoop clusters. GCP Dataflow: For stream and batch data processing (high-level idea). GCP Pub/Sub: For real-time messaging and event ingestion (high-level idea). Expertise in building automated, scalable, and reliable pipelines using custom Python/Scala solutions or Cloud Data Functions. Programming and Scripting Strong coding skills in SQL and Java. Familiarity with APIs and SDKs for GCP services to build custom data solutions. Cloud Infrastructure Understanding of GCP services such as Cloud Storage, Compute Engine, and Cloud Functions. Familiarity with Kubernetes (GKE) and containerization for deploying data pipelines. (Optional but good to have) DevOps and CI/CD Experience setting up CI/CD pipelines using Cloud Build, GitHub Actions, or other tools. Monitoring and logging tools like Cloud Monitoring and Cloud Logging for production workflows. Backend Development (Spring Boot & Java) Design and develop RESTful APIs and microservices using Spring Boot. Implement business logic, security, authentication (JWT/OAuth), and database operations. Work with relational databases (MySQL, PostgreSQL, MongoDB, Cloud SQL). Optimize backend performance, scalability, and maintainability. Implement unit testing and integration testing. Big Data ETL - Datawarehousing GCP Java RESTAPI CI/CD Kubernetes
Posted 1 month ago
5.0 years
0 Lacs
Gurgaon, Haryana, India
Remote
Job Title: Senior Data Engineer – Multi-Cloud (AWS, Azure, GCP) Location: Gurgaon, Haryana (Hybrid/Remote options available) Experience: 5+ years Employment Type: Full-time About The Role We are seeking a highly skilled and motivated Senior Data Engineer with hands-on experience across AWS, Azure, and GCP data ecosystems. The ideal candidate will be responsible for designing, building, and maintaining scalable data pipelines and architectures that support advanced analytics and real-time data processing. Key Responsibilities Technical Responsibilities Data Pipeline Development : Design and implement robust ETL/ELT pipelines using cloud-native tools. Cloud Expertise : AWS : EMR, Kinesis, Redshift, Glue Azure : HDInsight, Synapse Analytics, Stream Analytics GCP : Cloud Dataproc, Dataflow, Composer Data Modeling : Develop and optimize data models for analytics and reporting. Data Governance : Ensure data quality, security, and compliance across platforms. Automation & Orchestration : Use tools like Apache Airflow, AWS Step Functions, and GCP Composer for workflow orchestration. Monitoring & Optimization : Implement monitoring, logging, and performance tuning for data pipelines. Collaboration & Communication Work closely with data scientists, analysts, and business stakeholders to understand data needs. Translate business requirements into scalable technical solutions. Participate in code reviews, architecture discussions, and agile ceremonies. Required Qualifications Technical Skills Strong programming skills in Python, SQL, and optionally Scala or Java. Deep understanding of distributed computing, data warehousing, and stream processing. Experience with data lake architectures, data mesh, and real-time analytics. Proficiency in CI/CD practices and infrastructure as code (e.g., Terraform, CloudFormation). Certifications (Preferred) AWS Certified Data Analytics – Specialty Microsoft Certified: Azure Data Engineer Associate Google Professional Data Engineer Soft Skills & Attributes Analytical Thinking : Ability to break down complex problems and design scalable solutions. Communication : Strong verbal and written communication skills to explain technical concepts to non-technical stakeholders. Collaboration : Team player with a proactive attitude and the ability to work in cross-functional teams. Adaptability : Comfortable working in a fast-paced, evolving environment with shifting priorities. Ownership : High sense of accountability and a drive to deliver high-quality solutions.
Posted 1 month ago
4.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Major skillset: GCP, PySpark, SQL, Python, Cloud Architecture, ETL, Automation. 4+ years of experience in Data Engineering and Management, with a strong focus on Spark for building production-ready data pipelines. Experienced with analyzing large data sets from multiple data sources and building automated testing and validations. Knowledge of the Hadoop ecosystem and components like HDFS, Spark, Hive, Sqoop. Strong Python experience. Hands-on SQL and HQL to write optimized queries. Strong hands-on experience with GCP BigQuery, Dataproc, Airflow DAGs, Dataflow, GCS, Pub/Sub, Secret Manager, Cloud Functions, Beam. Ability to work in a fast-paced collaborative environment and work with various stakeholders to define strategic optimization initiatives. Deep understanding of distributed computing, memory tuning, and Spark optimization. Familiar with CI/CD workflows, Git. Experience in designing modular, automated, and secure ETL frameworks.
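A minimal PySpark sketch of the kind of production pipeline step this listing describes: read raw data, apply an aggregation, and write the result back out. The paths and column names are hypothetical; on Dataproc the gs:// paths work through the built-in GCS connector.

```python
# Minimal PySpark batch step: read Parquet, aggregate, write partitioned output.
# Paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-orders-aggregate").getOrCreate()

orders = spark.read.parquet("gs://my-bucket/raw/orders/")

daily = (
    orders
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date", "country")
    .agg(F.sum("amount").alias("revenue"), F.count("*").alias("order_count"))
)

(
    daily.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("gs://my-bucket/curated/daily_orders/")
)

spark.stop()
```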
Posted 1 month ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
We are seeking a skilled Data Engineer with over 5+ years of experience to design, build, and maintain scalable data pipelines and perform advanced data analysis to support business intelligence and data-driven decision-making. The ideal candidate will have a strong foundation in computer science principles, extensive experience with SQL and big data tools, and proficiency in cloud platforms and data visualization tools. Key Responsibilities: Design, develop, and maintain robust, scalable ETL pipelines using Apache Airflow, DBT, Composer (GCP), Control-M, Cron, Luigi, and similar tools. Build and optimize data architectures including data lakes and data warehouses. Integrate data from multiple sources ensuring data quality and consistency. Collaborate with data scientists, analysts, and stakeholders to translate business requirements into technical solutions. Analyze complex datasets to identify trends, generate actionable insights, and support decision-making. Develop and maintain dashboards and reports using Tableau, Power BI, and Jupyter Notebooks for visualization and pipeline validation. Manage and optimize relational and NoSQL databases such as MySQL, PostgreSQL, Oracle, MongoDB, and DynamoDB. Work with big data tools and frameworks including Hadoop, Spark, Hive, Kafka, Informatica, Talend, SSIS, and Dataflow. Utilize cloud data services and warehouses like AWS Glue, GCP Dataflow, Azure Data Factory, Snowflake, Redshift, and BigQuery. Support CI/CD pipelines and DevOps workflows using Git, Docker, Terraform, and related tools. Ensure data governance, security, and compliance standards are met. Participate in Agile and DevOps processes to enhance data engineering workflows. Required Qualifications: 5+ years of professional experience in data engineering and data analysis roles. Strong proficiency in SQL and experience with database management systems such as MySQL, PostgreSQL, Oracle, and MongoDB. Hands-on experience with big data tools like Hadoop and Apache Spark. Proficient in Python programming. Experience with data visualization tools such as Tableau, Power BI, and Jupyter Notebooks. Proven ability to design, build, and maintain scalable ETL pipelines using tools like Apache Airflow, DBT, Composer (GCP), Control-M, Cron, and Luigi. Familiarity with data engineering tools including Hive, Kafka, Informatica, Talend, SSIS, and Dataflow. Experience working with cloud data warehouses and services (Snowflake, Redshift, BigQuery, AWS Glue, GCP Dataflow, Azure Data Factory). Understanding of data modeling concepts and data lake/data warehouse architectures. Experience supporting CI/CD practices with Git, Docker, Terraform, and DevOps workflows. Knowledge of both relational and NoSQL databases, including PostgreSQL, BigQuery, MongoDB, and DynamoDB. Exposure to Agile and DevOps methodologies. Experience with Amazon Web Services (S3, Glue, Redshift, Lambda, Athena) Preferred Skills: Strong problem-solving and communication skills. Ability to work independently and collaboratively in a team environment. Experience with service development, REST APIs, and automation testing is a plus. Familiarity with version control systems and workflow automation.
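As one small example of the "data quality and consistency" responsibility described in this listing, a pandas-based validation step that a pipeline might run before loading a batch. The column names and thresholds are hypothetical placeholders, not the company's actual checks.

```python
# Minimal pre-load data quality check with pandas.
# Column names and thresholds are hypothetical placeholders.
import pandas as pd


def validate_batch(df: pd.DataFrame) -> list[str]:
    """Return a list of human-readable data quality failures (empty list = pass)."""
    failures = []
    if df.empty:
        failures.append("batch is empty")
    if df["order_id"].duplicated().any():
        failures.append("duplicate order_id values found")
    null_ratio = df["amount"].isna().mean()
    if null_ratio > 0.01:
        failures.append(f"amount null ratio {null_ratio:.2%} exceeds 1% threshold")
    if (df["amount"] < 0).any():
        failures.append("negative amounts found")
    return failures


if __name__ == "__main__":
    batch = pd.DataFrame({"order_id": [1, 2, 2], "amount": [10.0, None, -5.0]})
    problems = validate_batch(batch)
    if problems:
        raise ValueError("data quality check failed: " + "; ".join(problems))
```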
Posted 1 month ago
5.0 years
8 - 10 Lacs
Thiruvananthapuram
On-site
5 - 7 Years | 1 Opening | Trivandrum. Role description: We are recruiting Data Engineers with strong technical ability who can articulate well to a non-technical audience, who will join our team on a permanent basis. Role: The Data Engineer will engage with external Clients and internal customers, understand their needs, and design, build, and maintain data pipelines and infrastructure using Google Cloud Platform (GCP). This will involve the design and implementation of scalable data architectures, ETL processes, and data warehousing solutions on GCP. The role requires expertise in big data technologies, cloud computing, and data integration, as well as the ability to optimize data systems for performance and reliability. This requires a blend of skills including programming, database management, cloud infrastructure, and data pipeline development. Additionally, problem-solving skills, attention to detail, and the ability to work in a fast-paced environment are valuable traits. You will frequently work as part of a scrum team, together with data scientists, ML engineers, and analyst developers, to design and implement robust data infrastructure that supports analytics and machine learning initiatives. Responsibilities: Design, build, and maintain scalable data pipelines and ETL processes using GCP services such as Cloud Dataflow, Cloud Dataproc, and BigQuery. Implement and optimize data storage solutions using GCP technologies like Cloud Storage, Cloud SQL, and Cloud Spanner. Develop and maintain data warehouses and data lakes on GCP, ensuring data quality, accessibility, and security. Collaborate with data scientists and analysts to understand data requirements and provide efficient data access solutions. Implement data governance and security measures to ensure compliance with regulations and best practices. Automate data workflows and implement monitoring and alerting systems for data pipelines. Share data engineering knowledge with the wider functions and develop reusable data integration patterns and best practices. Skills/Experience: BSc/MSc in Computer Science, Information Systems, or related field, or equivalent work experience. Proven experience (6+ years) as a Data Engineer or similar role, preferably with GCP expertise. Strong proficiency in SQL and experience with NoSQL databases. Expertise in data modeling, ETL processes, and data warehousing concepts. Significant experience with GCP services such as BigQuery, Dataflow, Dataproc, Cloud Storage, and Pub/Sub. Proficiency in at least one programming language (e.g., Python, Java, or Scala) for data pipeline development. Experience with big data technologies such as Hadoop, Spark, and Kafka. Knowledge of data governance, security, and compliance best practices. GCP certifications (e.g., Professional Data Engineer) are highly advantageous. Effective communication skills to collaborate with cross-functional teams and explain technical concepts to non-technical stakeholders. Skills: BigQuery, ETL, Data Management, Python. About UST UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world’s best companies to make a real impact through transformation. Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients’ organizations. 
With over 30,000 employees in 30 countries, UST builds for boundless impact—touching billions of lives in the process.
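Related to the Pub/Sub and pipeline monitoring/alerting items in the listing above, a minimal sketch of publishing a pipeline status event to a Pub/Sub topic from Python; the project and topic names are hypothetical placeholders.

```python
# Minimal Pub/Sub publish sketch for pipeline status/alert messages.
# Project and topic names are hypothetical placeholders.
import json

from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "pipeline-alerts")

event = {"pipeline": "daily_orders", "status": "FAILED", "rows_loaded": 0}
future = publisher.publish(
    topic_path,
    data=json.dumps(event).encode("utf-8"),
    severity="ERROR",  # message attributes are plain string key/value pairs
)
print("published message id:", future.result())
```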
Posted 1 month ago
6.0 years
0 Lacs
Hyderābād
Remote
Core Technology: Machine Learning Level: 6+ Years Primary Skills: Google Vertex AI, Python Secondary Skills: ML Models, GCP Open Positions: 2 Job Location: Hyderabad Work Mode: Remote Deployment Type: Full Time Job Description Primary Skills Required: Strong understanding of MLOps practices Hands-on experience in deploying and productionizing ML models Proficient in Python Experience with Google Vertex AI Solid knowledge of machine learning algorithms such as XGBoost, classification models, and BigQuery ML (BQML) Key Responsibilities: Design, build, and maintain ML infrastructure on GCP using tools such as Vertex AI, GKE, Dataflow, BigQuery, and Cloud Functions. Develop and automate ML pipelines for model training, validation, deployment, and monitoring using tools like Kubeflow Pipelines, TFX, or Vertex AI Pipelines. Work with Data Scientists to productionize ML models and support experimentation workflows. Implement model monitoring and alerting for drift, performance degradation, and data quality issues. Manage and scale containerized ML workloads using Kubernetes (GKE) and Docker. Set up CI/CD workflows for ML using tools like Cloud Build, Bitbucket, Jenkins, or similar. Ensure proper security, versioning, and compliance across the ML lifecycle. Maintain documentation, artifacts, and reusable templates for reproducibility and auditability. Having a GCP MLE certification is a plus. Job Types: Full-time, Permanent, Fresher Schedule: Day shift Morning shift Application Question(s): Do you have hands-on experience in deploying and productionizing ML models? Are you proficient in Python? Do you have experience with Google Vertex AI? Do you have solid knowledge of machine learning algorithms such as XGBoost, classification models, and BigQuery ML (BQML)? Work Location: In person
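Since the listing calls out BigQuery ML (BQML) explicitly, a minimal sketch of training and evaluating a logistic regression classifier in BQML from Python follows. The dataset, table, model, and column names are hypothetical placeholders.

```python
# Minimal BQML sketch: train and evaluate a logistic regression classifier.
# Dataset, table, model, and column names are hypothetical placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

train_sql = """
CREATE OR REPLACE MODEL `my-project.ml.churn_model`
OPTIONS (model_type = 'LOGISTIC_REG', input_label_cols = ['churned']) AS
SELECT tenure_months, monthly_spend, support_tickets, churned
FROM `my-project.analytics.customers`
"""
client.query(train_sql).result()  # blocks until training finishes

eval_sql = "SELECT * FROM ML.EVALUATE(MODEL `my-project.ml.churn_model`)"
for row in client.query(eval_sql).result():
    print(dict(row))  # precision, recall, log_loss, roc_auc, etc.
```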
Posted 1 month ago
5.0 years
0 Lacs
Delhi, India
Remote
About Apply Digital Apply Digital is a global experience transformation partner. We drive AI-powered change and measurable impact across complex, multi-brand ecosystems. Leveraging expertise that spans across the customer experience lifecycle from strategy, design to engineering and beyond, we enable our clients to modernize their organizations and maximize value for their business and customers. Our 750+ team members have helped transform global companies like Kraft Heinz, NFL, Moderna, Lululemon, Dropbox, Atlassian, A+E Networks, and The Very Group. Apply Digital was founded in 2016 in Vancouver, Canada. In the past nine years, we have grown to ten cities across North America, South America, the UK, Europe, and India. At Apply Digital, we believe in the “ One Team ” approach, where we operate within a ‘pod’ structure. Each pod brings together senior leadership, subject matter experts, and cross-functional skill sets, all working within a common tech and delivery framework. This structure is underpinned by well-oiled scrum and sprint cadences, keeping teams in step to release often and retrospectives to ensure we progress toward the desired outcomes. Wherever we work in the world, we envision Apply Digital as a safe, empowered, respectful and fun community for people, every single day. Together, we work to embody our SHAPE (smart, humble, active, positive, and excellent) values and make Apply Digital a space for our team to connect, grow, and support each other to make a difference. Visit our Careers page to learn how we can unlock your potential. LOCATION: Apply Digital is a hybrid friendly organization with remote options available if needed. The preferred candidate should be based in (or within a location commutable to) the Delhi/NCR region of India , working in hours that have an overlap with the Eastern Standard Timezone (EST). About The Client In your initial role, you will support Kraft Heinz, a global, multi-billion-dollar leader in consumer packaged foods and a valued client of ours for the past three years. Apply Digital has a bold and comprehensive mandate to drive Kraft Heinz’s digital transformation . Through implementable strategies, cutting-edge technology, and data-driven innovation we aim to enhance consumer engagement and maximize business value for Kraft Heinz. Our composable architecture, modern engineering practices, and deep expertise in AI, cloud computing, and customer data solutions have enabled game-changing digital experiences. Our cross-functional team has delivered significant milestones, including the launch of the What's Cooking App, the re-building of 120+ brand sites in over 20 languages, and most recently, the implementation of a robust Customer Data Platform (CDP) designed to drive media effectiveness. Our work has also been recognized internationally and has received multiple awards . While your work will start with supporting Kraft Heinz, you will also have future opportunities to collaborate with the global team on other international brands. THE ROLE: Are you passionate about building scalable data pipelines and optimizing data architectures? Do you thrive in a fast-paced environment where data-driven decision-making and real-time analytics are essential? Are you excited to collaborate with cross-functional teams to design and implement modern cloud-based data solutions? If so, you may be ready to take on the Senior Data Engineer role within our team. 
As a Senior Data Engineer, you will play a key role in designing, building, and maintaining cloud-native data pipelines and architectures to support our Composable digital platforms. You will collaborate with engineers, product teams, and analytics stakeholders to develop scalable, secure, and high-performance data solutions that power real-time analytics, reporting, and machine learning workloads. This role requires deep expertise in data engineering, cloud technologies (Google Cloud Platform - BigQuery, Looker preferred), SQL, Python, and data pipeline orchestration tools (Dagster and DBT). WHAT YOU'LL DO: Design, build, and maintain scalable data pipelines and architectures to support analytical and operational workloads. Oversee the end-to-end management of the CDP platform, ensuring seamless integration across websites, mobile apps, CRM, adtech, and analytics systems to maintain data integrity and maximize activation potential. Collaborate with marketing, product, and data teams to enable real-time data activation and personalized customer experiences using unified CDP profiles. Build and maintain robust event instrumentation frameworks across digital properties to ensure accurate and consistent data capture for CDP ingestion and downstream use. Develop and optimize ETL/ELT pipelines, ensuring efficient data extraction, transformation, and loading from various sources. Work closely with backend and platform engineers to integrate data pipelines into cloud-native applications. Manage and optimize cloud data warehouses, primarily BigQuery, ensuring performance, scalability, and cost efficiency. Implement data governance, security, and privacy best practices, ensuring compliance with company policies and regulations. Collaborate with analytics teams to define data models and enable self-service reporting and BI capabilities. Collaborate with analytics teams to define scalable data models, maintain robust documentation (data dictionaries, lineage, metadata), and continuously monitor and optimize pipelines while staying current with evolving data engineering best practices. WHAT WE'RE LOOKING FOR: Strong proficiency in English (written and verbal communication) is required. Experience working with remote teams across North America and Latin America, ensuring smooth collaboration across time zones. 5+ years of experience in data engineering, with expertise in building scalable data pipelines and cloud-native data architectures. Proven hands-on experience implementing and managing CDPs like Twilio Segment (or similar CDPs), including event tracking plans, source/destination configuration, and identity resolution strategies. Deep understanding of MarTech ecosystems and how CDP data integrates with advertising platforms (Meta, Google Ads), CRM tools, and experimentation platforms for personalization and performance measurement. Strong proficiency in SQL for data modeling, transformation, and performance optimization. Experience with BI and data visualization tools (e.g., Looker, Tableau, or Google Data Studio). Expertise in Python for data processing, automation, and pipeline development. Extensive experience with cloud data platforms, especially Google Cloud (BigQuery, Cloud Storage, Pub/Sub), including hands-on implementation of ETL/ELT workflows using tools like DBT, Dataflow, or Apache Beam, and orchestration with Airflow, Dagster, or Cloud Workflows. 
Understanding of data privacy, security, and compliance best practices. Strong problem-solving skills, with the ability to debug and optimize complex data workflows. Excellent communication and collaboration skills. NICE TO HAVES: Experience with real-time data streaming solutions (e.g., Kafka, Pub/Sub, or Kinesis). Familiarity with machine learning workflows and MLOps best practices. Knowledge of Terraform for Infrastructure as Code (IaC) in data environments. Familiarity with data integrations involving Contentful, Algolia, Segment, and Talon.One. LIFE AT APPLY DIGITAL At Apply Digital, people are at the core of everything we do. We value your time, safety, and health, and strive to build a work community that can help you thrive and grow. Here are a few benefits we offer to support you: Location: Apply Digital is a hybrid-friendly organization with remote options available if needed. The preferred candidate should be based in (or within a location commutable to) Delhi/NCR, with the ability to overlap with the US/NA time zones when required. Comprehensive Benefits: benefit from private healthcare coverage, contributions to your Provident fund, and a gratuity bonus after five years of service. Vacation policy: work-life balance is key to our team’s success, so we offer flexible personal time off (PTO), allowing ample time away from work to promote overall well-being. Great projects: broaden your skills on a range of engaging projects with international brands that have a global impact. An inclusive and safe environment: we’re truly committed to building a culture where you are celebrated and everyone feels welcome and safe. Learning opportunities: we offer generous training budgets, including partner tech certifications, custom learning plans, workshops, mentorship, and peer support. Apply Digital is committed to building a culture where differences are celebrated, and everyone feels welcome. That’s why we value equal opportunity and nurture an inclusive workplace where our individual differences are recognized and valued. For more information, visit our website’s Diversity, Equity, and Inclusion (DEI) page. If you have special needs or accommodations at this stage of the recruitment process, please inform us as soon as possible by emailing us at careers@applydigital.com.
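For the Segment/CDP event instrumentation described in this listing, a minimal sketch using Segment's analytics-python library to identify a user and track an event. The write key, user id, event name, and properties are hypothetical placeholders, not the client's actual tracking plan.

```python
# Minimal Segment (analytics-python) instrumentation sketch.
# Write key, user id, event name, and properties are hypothetical placeholders.
import analytics

analytics.write_key = "YOUR_SEGMENT_WRITE_KEY"

# Tie the known user to their traits (feeds identity resolution in the CDP).
analytics.identify("user-123", {"email": "jane@example.com", "plan": "premium"})

# Record a behavioural event that downstream destinations (ads, CRM, warehouse) can use.
analytics.track("user-123", "Recipe Saved", {"recipe_id": "r-42", "source": "mobile_app"})

# Flush queued messages before the process exits (the client batches in the background).
analytics.flush()
```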
Posted 1 month ago