12.0 - 16.0 years
0 Lacs
Karnataka
On-site
Foundry Services (FS) is an independent foundry business established to meet customers' unique product needs. With the first Open System Foundry model globally, combined offerings include wafer fabrication, advanced process and packaging technology, chiplets, software, a robust ecosystem, and assembly and test capabilities. This helps customers build innovative silicon designs and deliver customizable products from Intel's secure, resilient, and sustainable supply source. This job opportunity in FS will be part of the Customer Solutions Engineering (CSE) group, responsible for bringing the best of Intel technologies to FS customers and accelerating solutions from architecture to post-silicon validation. We are seeking an experienced Floorplan Engineer to focus on floorplanning, die estimation, and power planning for high-performance designs. Responsibilities include establishing integration plans for the die with optimization for package and board constraints, bump planning, die file generation, collaborating with architects on IP and SoC placement optimization, collaborating on clocking and dataflow, deriving specifications for IP blocks, coordinating with the power delivery team, maximizing die-per-reticle/good-die-per-wafer, applying RDL routing knowledge, and package integration before tape-out.
**Qualifications:** - 12+ years of experience after a Bachelor's or Master's degree in Electrical/Electronic/VLSI Engineering or a related field. - Has led multiple SoCs as SoC floorplan lead, with expertise in design planning and die estimation, and knowledge of clocking, high-speed signal routing, industry protocols, IP architecture, and library/memory/technology/submicron issues. - Strong teamwork, flexibility, and the ability to thrive in a dynamic environment.
**Job Type:** Experienced Hire **Shift:** Shift 1 (India) **Primary Location:** India, Bangalore
Intel Foundry is committed to transforming the global semiconductor industry by providing cutting-edge silicon process and packaging technology. By innovating under Moore's Law, fostering collaboration, and investing in geographically diverse manufacturing capacities, Intel Foundry enables the world to deliver essential computing, server, mobile, networking, and automotive systems for the AI era. This position is part of the Foundry Services business unit within Intel Foundry, dedicated to customer success with full P&L responsibilities.
Posted 3 days ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Equifax is seeking creative, high-energy and driven software engineers with hands-on development skills to work on a variety of meaningful projects. Our software engineering positions provide you the opportunity to join a team of talented engineers working with leading-edge technology. You are ideal for this position if you are a forward-thinking, committed, and enthusiastic software engineer who is passionate about technology.
What You’ll Do
Perform general application development activities, including unit testing, code deployment to the development environment and technical documentation. Work on one or more projects, making contributions to unfamiliar code written by team members. Diagnose and resolve performance issues. Participate in the estimation process, use case specifications, reviews of test plans and test cases, requirements, and project planning. Document code/processes so that any other developer is able to dive in with minimal effort. Develop and operate high-scale applications from the backend to the UI layer, focusing on operational excellence, security and scalability. Apply modern software development practices (serverless computing, microservices architecture, CI/CD, infrastructure-as-code, etc.). Work across teams to integrate our systems with existing internal systems, Data Fabric, and the CSA Toolset. Participate in technology roadmap and architecture discussions to turn business requirements and vision into reality. Participate in a tight-knit engineering team employing agile software development practices. Triage product or system issues and debug/track/resolve them by analyzing the sources of issues and their impact on network or service operations and quality. Write, debug, and troubleshoot code in mainstream open source technologies. Lead the effort for Sprint deliverables, and solve problems of medium complexity. Research, create, and develop software applications to extend and improve on Equifax Solutions. Collaborate on scalability issues involving access to data and information. Actively participate in Sprint planning, Sprint retrospectives, and other team activities.
What Experience You Need
Bachelor's degree or equivalent experience. 5+ years of working experience in software development using multiple versions of Python. Experience and familiarity with the Python frameworks commonly used in modern software development. Develop, test, and deploy high-quality Python code for AI/ML applications, data pipelines, and backend services. Design, implement, and optimize machine learning models and algorithms for various business problems. Collaborate with data scientists to transition experimental models into production-ready systems. Build and maintain robust data ingestion and processing pipelines to feed data into ML models. Perform code reviews, provide constructive feedback, and ensure adherence to best coding practices. Troubleshoot, debug, and optimize existing ML systems and applications for performance and scalability. Stay up-to-date with the latest advancements in Python, machine learning, and related technologies. Document technical designs, processes, and operational procedures. Experience with cloud technology: GCP or AWS.
What could set you apart
Self-starter who identifies and responds to priority shifts with minimal supervision. Experience designing and developing big data processing solutions using Dataflow/Apache Beam, Bigtable, BigQuery, Pub/Sub, GCS, Composer/Airflow, and others. Source code control management systems (e.g. Git, GitHub).
Agile environments (e.g. Scrum, XP). Atlassian tooling (e.g. JIRA, Confluence, and GitHub). Developing with modern Python versions.
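The Dataflow/Apache Beam experience called out above is easiest to picture with a small pipeline. The following is a minimal, hedged sketch of a batch Beam job; the project, bucket, dataset, and table names are hypothetical placeholders, and the target BigQuery table is assumed to already exist.

```python
# Minimal Apache Beam sketch: read CSV events from GCS, parse them, and load
# them into BigQuery. Swap the runner for "DirectRunner" to test locally.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def parse_row(line: str) -> dict:
    """Split a CSV line into the columns of the (assumed) target table."""
    user_id, event, ts = line.split(",")
    return {"user_id": user_id, "event": event, "event_ts": ts}


def run() -> None:
    options = PipelineOptions(
        runner="DataflowRunner",          # "DirectRunner" for local testing
        project="my-project",             # hypothetical project id
        region="us-central1",
        temp_location="gs://my-bucket/tmp",
    )
    with beam.Pipeline(options=options) as pipeline:
        (
            pipeline
            | "Read" >> beam.io.ReadFromText("gs://my-bucket/events/*.csv")
            | "Parse" >> beam.Map(parse_row)
            | "Write" >> beam.io.WriteToBigQuery(
                "my-project:analytics.events",   # hypothetical table
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
                create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER,
            )
        )


if __name__ == "__main__":
    run()
```

The same shape scales from the DirectRunner on a laptop to a managed Dataflow job simply by changing the pipeline options.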
Posted 3 days ago
10.0 - 15.0 years
15 - 25 Lacs
Chennai, Bengaluru
Hybrid
Role Overview
We're seeking a highly seasoned Solution Architect with deep expertise in Google Cloud Platform (GCP) and a proven track record in designing data and AI infrastructure tailored to AdTech use cases. You'll be pivotal in building scalable, performant, and privacy-compliant systems to support real-time bidding, campaign analytics, customer segmentation, and AI-driven personalization.
Key Responsibilities
Architect and lead GCP-native solutions for AdTech: real-time bidding (RTB/OpenRTB), campaign analytics, lookalike modeling, and audience segmentation. Design high-throughput data pipelines, event-driven architectures, and unified audience data lakes leveraging GCP services: BigQuery, Dataflow, Pub/Sub, Cloud Storage, Dataproc, Cloud Composer (Airflow), Dataplex, Vertex AI / AutoML, Cloud Functions, Cloud Run, GKE, Looker, and Apigee. Collaborate with ad ops, marketing, and product stakeholders to translate business goals into architecture roadmaps; lead discovery workshops, solution assessments, and architecture reviews in pre-sales and delivery cycles. Integrate with third-party AdTech/MarTech platforms including DSPs, SSPs, CDPs, DMPs, ad exchanges, identity graphs, and consent/identity resolution systems (e.g. LiveRamp, The Trade Desk, Google Ads Data Hub). Ensure architecture aligns with GDPR, CCPA, IAB TCF and data privacy regulations; support consent management, anonymization, encryption, and access controls. Lead multidisciplinary technical teams (Data Engineering, MLOps, Analytics) and enforce best practices in data governance, CI/CD, and MLOps (via Cloud Build, Terraform, Kubeflow/Vertex AI pipelines). Mentor engineers, run architecture reviews, and define governance, cost optimization, security strategy and system observability. Conduct hands-on prototyping and PoCs to validate AI/ML capabilities and enable rapid experimentation before full-scale implementation.
Tech Stack Expertise & Qualifications
15+ years in technical architecture, consulting, or senior engineering roles (preferably with the majority in data & analytics); at least 5+ years hands-on with GCP architectures. In-depth knowledge and hands-on experience of: GCP data and analytics stack: BigQuery, Dataflow, Dataproc, Pub/Sub, Cloud Composer, Dataplex, Cloud Storage. AI/ML on GCP: Vertex AI, AI Platform, AutoML, model deployment, inference pipelines. Compute frameworks: Cloud Functions, Cloud Run, GKE, Apigee. Business intelligence and visualization: Looker. Infrastructure as code: Terraform; CI/CD pipelines: Cloud Build, Git-based workflows. Skilled in Python and SQL; familiarity with Java or Scala is a plus. Experience designing event-driven architectures, streaming data pipelines, microservices, and API-based integrations. Proven AdTech domain expertise: programmatic advertising, RTB/OpenRTB, identity resolution, cookieless frameworks, DMP/CDP data flows. Proven experience with data governance, encryption, IAM, PII anonymization, and privacy-enhancing tech. Strong ability to code prototypes or PoCs to solve client challenges quickly, with high-quality architectural foundations. Excellent communication skills, able to clearly present complex designs to both technical and non-technical audiences.
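To make the event-driven architecture requirement above concrete, here is a small, hedged Pub/Sub sketch; the project, topic, and subscription names are hypothetical and are assumed to already exist, and a real RTB flow would of course publish far higher volumes behind the bidder.

```python
# Publish one bid-request event and consume it with a streaming pull
# subscriber, acknowledging each message after processing.
import json
from concurrent.futures import TimeoutError

from google.cloud import pubsub_v1

PROJECT = "my-project"                    # hypothetical
TOPIC = "bid-requests"                    # hypothetical
SUBSCRIPTION = "bid-requests-analytics"   # hypothetical

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(PROJECT, TOPIC)
event = {"auction_id": "a-123", "user_segment": "sports", "bid_floor": 0.25}
publisher.publish(topic_path, json.dumps(event).encode("utf-8")).result()

subscriber = pubsub_v1.SubscriberClient()
sub_path = subscriber.subscription_path(PROJECT, SUBSCRIPTION)


def handle(message: pubsub_v1.subscriber.message.Message) -> None:
    payload = json.loads(message.data)
    print("received auction", payload["auction_id"])
    message.ack()


streaming_pull = subscriber.subscribe(sub_path, callback=handle)
try:
    streaming_pull.result(timeout=30)   # listen for 30 seconds, then stop
except TimeoutError:
    streaming_pull.cancel()
```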
Posted 3 days ago
5.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
As a Senior Data Engineer, you will architect, build, and maintain our data infrastructure that powers critical business decisions. You will work closely with data scientists, analysts, and product teams to design and implement scalable solutions for data processing, storage, and retrieval. Your work will directly impact our ability to leverage data for business intelligence, machine learning initiatives, and customer insights.
Responsibilities
Design, build, and maintain our end-to-end data infrastructure on AWS and GCP cloud platforms. Develop and optimize ETL/ELT pipelines to process large volumes of data from multiple sources. Build and support data pipelines for reporting, analytics, and machine learning applications. Implement and manage streaming data solutions using Kafka and other technologies. Design and optimize database schemas and data models in ClickHouse and other databases. Develop and maintain data workflows using Apache Airflow and similar orchestration tools. Write efficient, maintainable, and scalable code using PySpark and other data processing frameworks. Collaborate with data scientists to implement ML infrastructure for model training and deployment. Ensure data quality, reliability, and security across all data platforms. Monitor data pipelines and implement proactive alerting systems. Troubleshoot and resolve data infrastructure issues. Document data flows, architectures, and processes. Stay current with industry trends and emerging technologies in data engineering.
Requirements
Bachelor's degree in Computer Science, Engineering, or a related technical field (Master's preferred). 5+ years of experience in data engineering roles. Strong expertise in AWS and/or GCP cloud platforms and services. Proficiency in building data pipelines using modern ETL/ELT tools and frameworks. Experience with stream processing technologies such as Kafka. Hands-on experience with ClickHouse or similar analytical databases. Strong programming skills in Python and experience with PySpark. Experience with workflow orchestration tools like Apache Airflow. Solid understanding of data modeling, data warehousing concepts, and dimensional modeling. Knowledge of SQL and NoSQL databases. Strong problem-solving skills and attention to detail. Excellent communication skills and ability to work in cross-functional teams. Experience in D2C, e-commerce, or retail industries. Knowledge of data visualization tools (Tableau, Looker, Power BI). Experience with real-time analytics solutions. Familiarity with CI/CD practices for data pipelines. Experience with containerization technologies (Docker, Kubernetes). Understanding of data governance and compliance requirements. Experience with MLOps or ML engineering.
Technologies: Cloud Platforms: AWS (S3, Redshift, EMR, Lambda), GCP (BigQuery, Dataflow, Dataproc). Data Processing: Apache Spark, PySpark, Python, SQL. Streaming: Apache Kafka, Kinesis. Data Storage: ClickHouse, S3, BigQuery, PostgreSQL, MongoDB. Orchestration: Apache Airflow. Version Control: Git. Containerization: Docker, Kubernetes (optional). This job was posted by Sidharth Patra from Traya Health.
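As an illustration of the Kafka plus PySpark streaming work described above, here is a hedged sketch; the broker address, topic name, and event schema are hypothetical, and the job assumes the Spark Kafka connector package is available on the cluster.

```python
# Consume JSON order events from Kafka and count statuses in 5-minute
# windows, tolerating 10 minutes of late data via a watermark.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("order-events-stream").getOrCreate()

schema = StructType([
    StructField("order_id", StringType()),
    StructField("status", StringType()),
    StructField("event_time", TimestampType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "orders")                     # hypothetical topic
    .load()
    .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

counts = (
    events.withWatermark("event_time", "10 minutes")
    .groupBy(F.window("event_time", "5 minutes"), "status")
    .count()
)

# Console sink for demonstration; a production job would write to a store
# such as ClickHouse or a warehouse table instead.
query = counts.writeStream.outputMode("update").format("console").start()
query.awaitTermination()
```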
Posted 3 days ago
7.0 years
0 Lacs
India
On-site
Job Title: Data Engineer – Google Cloud Platform (GCP) Locations: Udupi | Mangaluru | Bangalore | Mumbai Experience Required: 5–7 Years
About the Role
Are you a skilled Data Engineer with a passion for building scalable, high-performance data solutions on Google Cloud Platform (GCP)? Join us to work on impactful, enterprise-level projects where you’ll design end-to-end data pipelines, optimize workflows, and help transform raw data into actionable business insights.
Key Responsibilities
Design and implement scalable ETL/ELT pipelines using Dataflow, Dataproc (PySpark), and Cloud Composer (Airflow). Work with BigQuery for schema design, performance tuning, and SQL optimization. Build and maintain data models (Star and Snowflake schema) for analytics and reporting. Develop Python and SQL-based scripts for transformations and automation. Integrate data from diverse sources while ensuring data quality and governance. Collaborate with data scientists, analysts, and product teams to deliver business-driven solutions. Ensure security, scalability, and compliance across all data systems. (Bonus) Utilize Cloud Functions and Pub/Sub for event-driven data processing.
Must-Have Skills
5+ years of hands-on experience with GCP-based data engineering. BigQuery expertise (mandatory). Strong knowledge of ETL/ELT pipeline development using GCP tools. Proficiency in Python and SQL. Solid understanding of data modeling and warehouse concepts.
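For a sense of what the Cloud Composer (Airflow) orchestration mentioned above can look like, here is a hedged sketch of a daily ELT DAG; the project, dataset, table, and bucket names are hypothetical, and the staging and warehouse datasets are assumed to exist.

```python
# Daily DAG: load raw CSVs from GCS into a staging table, then build an
# aggregated warehouse table with a BigQuery query job.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import (
    BigQueryInsertJobOperator,
)
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import (
    GCSToBigQueryOperator,
)

with DAG(
    dag_id="daily_sales_elt",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    load_raw = GCSToBigQueryOperator(
        task_id="load_raw_sales",
        bucket="my-landing-bucket",                       # hypothetical
        source_objects=["sales/{{ ds }}/*.csv"],
        destination_project_dataset_table="my_project.staging.sales_raw",
        source_format="CSV",
        write_disposition="WRITE_TRUNCATE",
    )

    transform = BigQueryInsertJobOperator(
        task_id="build_sales_summary",
        configuration={
            "query": {
                "query": """
                    SELECT order_id, customer_id, SUM(amount) AS total_amount
                    FROM `my_project.staging.sales_raw`
                    GROUP BY order_id, customer_id
                """,
                "destinationTable": {
                    "projectId": "my_project",
                    "datasetId": "warehouse",
                    "tableId": "sales_summary",
                },
                "writeDisposition": "WRITE_TRUNCATE",
                "useLegacySql": False,
            }
        },
    )

    load_raw >> transform
```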
Posted 3 days ago
10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About the Job
The Director, Data Engineering will lead the development and implementation of a comprehensive data strategy that aligns with the organization’s business goals and enables data-driven decision making.
Roles and Responsibilities
Build and manage a team of talented data managers and engineers with the ability to not only keep up with, but also pioneer, in this space. Collaborate with and influence leadership to directly impact company strategy and direction. Develop new techniques and data pipelines that will enable various insights for internal and external customers. Develop deep partnerships with client implementation teams, engineering and product teams to deliver on major cross-functional measurements and testing. Communicate effectively to all levels of the organization, including executives. Provide success in partnering teams with dramatically varying backgrounds, from the highly technical to the highly creative. Design a data engineering roadmap and execute the vision behind it. Hire, lead, and mentor a world-class data team. Partner with other business areas to co-author and co-drive strategies on our shared roadmap. Oversee the movement of large amounts of data into our data lake. Establish a customer-centric approach and synthesize customer needs. Own end-to-end pipelines and destinations for the transfer and storage of all data. Manage 3rd-party resources and critical data integration vendors. Promote a culture that drives autonomy, responsibility, perfection and mastery. Maintain and optimize software and cloud expenses to meet the financial goals of the company. Provide technical leadership to the team in design and architecture of data products and drive change across process, practices, and technology within the organization. Work with engineering managers and functional leads to set direction and ambitious goals for the Engineering department. Ensure data quality, security, and accessibility across the organization.
Skills You Will Need
10+ years of experience in data engineering. 5+ years of experience leading data teams of 30+ resources, including selection of talent and planning/allocating resources across multiple geographies and functions. 5+ years of experience with GCP tools and technologies, specifically Google BigQuery, Google Cloud Composer, Dataflow, Dataform, etc. Experience creating large-scale data engineering pipelines, data-based decision-making and quantitative analysis tools and software. Hands-on experience with code version control systems (Git). Experience with CI/CD, data architectures, pipelines, quality, and code management. Experience with complex, high-volume, multi-dimensional data based on unstructured, structured, and streaming datasets. Experience with SQL and NoSQL databases. Experience creating, testing, and supporting production software and systems. Proven track record of identifying and resolving performance bottlenecks for production systems. Experience designing and developing data lake, data warehouse, ETL and task-orchestration systems. Strong leadership, communication, time management and interpersonal skills. Proven architectural skills in data engineering. Experience leading teams developing production-grade data pipelines on large datasets. Experience designing a large data lake and lakehouse, managing data flows that integrate information from various sources into a common pool, and implementing data pipelines based on the ETL model. Experience with common data languages (e.g. Python, Scala) and data warehouses (e.g. Redshift, BigQuery, Snowflake, Databricks). Extensive experience with cloud tools and technologies, GCP preferred. Experience managing real-time data pipelines. Successful track record and demonstrated thought leadership, cross-functional influence and partnership within an agile/waterfall development environment. Experience in regulated industries or with compliance frameworks (e.g., SOC 2, ISO 27001).
Nice to have: HR services industry experience. Experience in data science, including predictive modeling. Experience leading teams across multiple geographies.
Posted 3 days ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Immediate #HIRING: we are looking for a highly motivated and experienced GCP Data Engineer to join our growing team. We’re a leading software company specializing in Artificial Intelligence, Machine Learning, Data Analytics, innovative data solutions, and cloud-based technologies. If you're passionate about building robust applications and thrive in a dynamic environment, please share your resume at rizwana@randomtrees.com
Job Title: GCP Data Engineer Experience: 4–8 years Notice: Immediate Location: Hyderabad / Chennai - Hybrid mode Job Type: Full-time employment
Job Description: We are looking for an experienced GCP Data Engineer to design, develop, and optimize data pipelines and solutions on Google Cloud Platform (GCP). The ideal candidate should have hands-on experience with BigQuery, Dataflow, PySpark, GCS, and Airflow (Cloud Composer), along with strong expertise or knowledge in DBT.
Key Responsibilities: Design and develop scalable ETL/ELT data pipelines using Dataflow (Apache Beam), PySpark, and Airflow (Cloud Composer). Work extensively with BigQuery for data transformation, storage, and analytics. Implement data ingestion, processing, and transformation workflows using GCP-native services. Optimize and troubleshoot performance issues in BigQuery and Dataflow pipelines. Manage data storage and governance using Google Cloud Storage (GCS) and other GCP services. Ensure data quality, security, and compliance with industry standards. Work closely with data scientists, analysts, and business teams to provide data solutions. Automate workflows, monitor jobs, and improve pipeline efficiency.
Required Skills: ✔ Google Cloud Platform (GCP) Data Engineering (GCP DE Certification preferred) ✔ DBT knowledge or experience is mandatory ✔ BigQuery – Data modeling, query optimization, and performance tuning ✔ PySpark – Data processing and transformation ✔ GCS (Google Cloud Storage) – Data storage and management ✔ Airflow / Cloud Composer – Workflow orchestration and scheduling ✔ SQL & Python – Strong hands-on experience ✔ Experience with CI/CD pipelines, Terraform, or Infrastructure as Code (IaC) is a plus.
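The BigQuery performance-tuning skills listed above usually come down to partitioning, clustering, and partition-pruned queries. Below is a hedged sketch using the Python client; the project, dataset, and table names are hypothetical.

```python
# Create a date-partitioned, clustered table, then run a query whose filter
# on the partitioning column limits the bytes scanned to a single day.
import datetime

from google.cloud import bigquery

client = bigquery.Client(project="my-project")        # hypothetical project

client.query(
    """
    CREATE TABLE IF NOT EXISTS `my-project.analytics.page_views`
    (
      view_ts TIMESTAMP,
      user_id STRING,
      page    STRING
    )
    PARTITION BY DATE(view_ts)
    CLUSTER BY user_id
    """
).result()

job = client.query(
    """
    SELECT page, COUNT(*) AS views
    FROM `my-project.analytics.page_views`
    WHERE DATE(view_ts) = @day
    GROUP BY page
    ORDER BY views DESC
    """,
    job_config=bigquery.QueryJobConfig(
        query_parameters=[
            bigquery.ScalarQueryParameter("day", "DATE", datetime.date(2024, 6, 1))
        ]
    ),
)
for row in job.result():
    print(row.page, row.views)
```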
Posted 4 days ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
GCP Data Engineer Job Location: Chennai Experience: 5 to 7 years Open Positions: 2
Job Description: Role: GCP Data Engineer (Mid Level)
Position Overview: We are seeking a mid-level GCP Data Engineer with 4+ years of experience in ETL, Data Warehousing, and Data Engineering. The ideal candidate will have hands-on experience with GCP tools, solid data analysis skills, and a strong understanding of Data Warehousing principles.
Qualifications: 4+ years of experience in ETL & Data Warehousing. Should have excellent leadership & communication skills. Should have experience in developing Data Engineering solutions using Airflow, GCP BigQuery, Cloud Storage, Dataflow, Cloud Functions, Pub/Sub, Cloud Run, etc. Should have built solution automations in any of the above ETL tools. Should have executed at least 2 GCP Cloud Data Warehousing projects. Should have worked on at least 2 projects using Agile/SAFe methodology. Should have mid-level experience in PySpark and Teradata. Should have working experience with DevOps tools like GitHub, Jenkins, Cloud Native, etc., with semi-structured data formats like JSON, Parquet and/or XML files, and should have written complex SQL queries for data analysis and extraction. Should have an in-depth understanding of Data Warehousing, Data Analysis, Data Profiling, Data Quality & Data Mapping.
Education: B.Tech./B.E. in Computer Science or a related field. Certifications: Google Cloud Professional Data Engineer Certification.
Roles & Responsibilities: Analyze the different source systems, profile data, and understand, document & fix Data Quality issues. Gather requirements and business process knowledge in order to transform the data in a way that is geared towards the needs of end users. Write complex SQLs to extract & format source data for ETL/data pipelines. Create design documents, Source to Target Mapping documents and any supporting documents needed for deployment/migration. Design, develop and test ETL/data pipelines. Design & build metadata-based frameworks needed for data pipelines. Write unit test cases, execute unit testing and document unit test results. Deploy ETL/data pipelines. Use DevOps tools to version, push/pull code and deploy across environments. Support the team during troubleshooting & debugging of defects, bug fixes, business requests, environment migrations & other ad hoc requests. Do production support, enhancements and bug fixes. Work with business and technology stakeholders to communicate EDW incidents/problems and manage their expectations. Leverage ITIL concepts to circumvent incidents, manage problems and document knowledge. Perform data cleaning, transformation, and validation to ensure accuracy and consistency across various data sources. Stay current on industry best practices and emerging technologies in data analysis and cloud computing, particularly within the GCP ecosystem.
Posted 4 days ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: GCP Data Engineer Location: Chennai 34350 Type: Contract Budget: Up to ₹18 LPA Notice Period: Immediate joiners preferred 🧾 Job Description We are seeking an experienced Google Cloud Platform (GCP) Data Engineer to join our team in Chennai. This role is centered on designing and building cloud-based data solutions that support AI/ML, analytics, and business intelligence use cases. You will develop scalable and high-performance pipelines, integrate and transform data from various sources, and support both real-time and batch data needs. 🛠️ Key Responsibilities Design and implement scalable batch and real-time data pipelines using GCP services such as BigQuery, Dataflow, Dataform, Cloud Composer (Airflow), Data Fusion, Dataproc, Cloud SQL, Compute Engine, and others. Build data products that combine historical and live data for business insights and analytical applications. Lead efforts in data transformation, ingestion, integration, data mart creation, and activation of data assets. Collaborate with cross-functional teams including AI/ML, analytics, DevOps, and product teams to deliver robust cloud-native solutions. Optimize pipelines for performance, reliability, and cost-effectiveness. Contribute to data governance, quality assurance, and security best practices. Drive innovation by integrating AI/ML features, maintaining strong documentation, and applying continuous improvement strategies. Provide production support, troubleshoot failures, and meet SLAs using GCP’s monitoring tools. Work within an Agile environment, follow CI/CD practices, and apply test-driven development (TDD). ✅ Skills Required Strong experience in: BigQuery, Dataflow, Dataform, Data Fusion, Cloud SQL, Compute Engine, Dataproc, Airflow (Cloud Composer), Cloud Functions, Cloud Run Programming experience with Python, Java, PySpark, or Apache Beam Proficient in SQL (5+ years) for complex data handling Hands-on with Terraform, Tekton, Cloud Build, GitHub, Docker Familiarity with Apache Kafka, Pub/Sub, Kubernetes GCP Certified (Associate or Professional Data Engineer) ⭐ Skills Preferred Deep knowledge of cloud architecture and infrastructure-as-code tools Experience in data security, regulatory compliance, and data governance Experience with AI/ML solutions or platforms Understanding of DevOps pipelines, CI/CD using Cloud Build, and containerization Exposure to financial services data or similar regulated environments Experience in mentoring and leading engineering teams Tools: JIRA, Artifact Registry, App Engine 🎓 Education Required: Bachelor's Degree (in Computer Science, Engineering, or related field) Preferred: Master’s Degree 📌 Additional Details Role Type: Contract-based Work Location: Chennai, Onsite Target Candidates: Mid to Senior level with minimum 5+ years of data engineering experience Skills: gcp,apache,pyspark,data,docker
Posted 4 days ago
7.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Role: Big Data Developer Location: Chennai Experience: 7+ years Work Mode: Work from Office
Key Skills Required
Google Cloud Platform (GCP), BigQuery (BQ), Dataflow, Dataproc, Cloud Spanner. Strong knowledge of distributed systems, data processing frameworks, and big data architecture. Proficiency in programming languages like Python, Java, or Scala.
Roles And Responsibilities
BigQuery (BQ): Design and develop scalable data warehouses using BigQuery. Optimize SQL queries for performance and cost-efficiency in BigQuery. Implement data partitioning and clustering strategies.
Dataflow: Build and maintain batch and streaming data pipelines using Apache Beam on GCP Dataflow. Ensure data transformation, enrichment, and cleansing as per business needs. Monitor and troubleshoot pipeline performance issues.
Dataproc: Develop and manage Spark and Hadoop jobs on GCP Dataproc. Perform ETL/ELT operations using PySpark, Hive, or other tools. Automate and orchestrate jobs for scheduled data workflows.
Cloud Spanner: Design and manage globally distributed, scalable transactional databases using Cloud Spanner. Optimize schema and query design for performance and reliability. Implement high availability and disaster recovery strategies.
General Responsibilities: Collaborate with data architects, analysts, and business stakeholders to understand data requirements. Implement data quality and data governance best practices. Ensure security and compliance with GCP data handling standards. Participate in code reviews, CI/CD deployments, and Agile development cycles.
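As a hedged illustration of the Dataproc responsibilities above, the following PySpark batch job (GCS paths are hypothetical) de-duplicates raw Parquet events and writes a curated, date-partitioned copy; it could be submitted with `gcloud dataproc jobs submit pyspark`.

```python
# Keep only the newest record per event_id, derive a partition column, and
# write the cleansed data back to GCS as date-partitioned Parquet.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("clean-events").getOrCreate()

raw = spark.read.parquet("gs://my-bucket/raw/events/")   # hypothetical bucket

latest_first = Window.partitionBy("event_id").orderBy(F.col("ingest_ts").desc())
cleaned = (
    raw.withColumn("rn", F.row_number().over(latest_first))
    .filter(F.col("rn") == 1)
    .drop("rn")
    .withColumn("event_date", F.to_date("event_ts"))
)

(
    cleaned.write.mode("overwrite")
    .partitionBy("event_date")
    .parquet("gs://my-bucket/curated/events/")
)
```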
Posted 4 days ago
1.0 - 5.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
As a skilled Data Engineer, you will leverage your expertise to contribute to the development of data modeling, ETL processes, and reporting systems. With over 3 years of hands-on experience in areas such as ETL, BigQuery, SQL, Python, or Alteryx, you will play a crucial role in enhancing data engineering processes. Your advanced knowledge of SQL programming and database management will be key in ensuring the efficiency of data operations. In this role, you will utilize your solid experience with Business Intelligence reporting tools like Power BI, Qlik Sense, Looker, or Tableau to create insightful reports and analytics. Your understanding of data warehousing concepts and best practices will enable you to design robust data solutions. Your problem-solving skills and attention to detail will be instrumental in addressing data quality issues and proposing effective BI solutions. Collaboration and communication are essential aspects of this role, as you will work closely with stakeholders to define requirements and develop data-driven insights. Your ability to work both independently and as part of a team will be crucial in ensuring the successful delivery of projects. Additionally, your proactive approach to learning new tools and techniques will help you stay ahead in a dynamic environment. Preferred skills include experience with GCP cloud services, Python, Hive, Spark, Scala, JavaScript, and various BI/reporting tools. Your strong oral, written, and interpersonal communication skills will enable you to effectively convey insights and solutions to stakeholders. A Bachelor's degree in Computer Science, Computer Information Systems, or a related field is required for this role. Overall, as a Data Engineer, you will play a vital role in developing and maintaining data pipelines, reporting systems, and dashboards. Your expertise in SQL, BI tools, and data validation will contribute to ensuring data accuracy and integrity across all systems. Your analytical mindset and ability to perform root cause analysis will be key in identifying opportunities for improvement and driving data-driven decision-making within the organization.
Posted 4 days ago
0.0 - 18.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
Job Information
Date Opened: 07/28/2025 | Job Type: Full time | Work Experience: 10-18 years | Industry: Technology | Number of Positions: 1 | City: Chennai | State/Province: Tamil Nadu | Country: India | Zip/Postal Code: 600086
About Us
Why a career in Zuci is unique! Constant attention is the source of our perfection. We fundamentally believe that building a career is all about consistency. If you jog or walk for a few days, it won’t bring in big results. If you do the right things every day for hundreds of days, you'll become lighter, more flexible, and you'll start enjoying your work and life more. Our customers trust us because of our unwavering consistency, enabling us to deliver high-quality work and thereby give our customers and Team Zuci the best shot at extraordinary outcomes. Do you see the big picture? Is Digital Engineering your forte?
Job Description
Solution Architect – Data & AI (GCP + AdTech Focus) Experience: 15+ Years Employment Type: Full Time
Role Overview: We are seeking a highly experienced Solution Architect with deep expertise in Google Cloud Platform (GCP) and a proven track record in architecting data and AI solutions for the AdTech industry. This role will be pivotal in designing scalable, real-time, and privacy-compliant solutions for programmatic advertising, customer analytics, and AI-driven personalization. The ideal candidate should blend strong technical architecture capabilities with deep domain expertise in advertising technology and digital marketing ecosystems.
Key Responsibilities: Architect and lead GCP-native data and AI solutions tailored to AdTech use cases, such as real-time bidding, campaign analytics, customer segmentation, and lookalike modeling. Design high-throughput data pipelines, audience data lakes, and analytics platforms leveraging GCP services like BigQuery, Dataflow, Pub/Sub, Cloud Storage, Vertex AI, etc. Collaborate with ad operations, marketing teams, and digital product owners to understand business goals and translate them into scalable and performant solutions. Integrate with third-party AdTech and MarTech platforms, including DSPs, SSPs, CDPs, DMPs, ad exchanges, and identity resolution systems. Ensure architectural alignment with data privacy regulations (GDPR, CCPA) and support consent management and data anonymization strategies. Drive technical leadership across multi-disciplinary teams (Data Engineering, MLOps, Analytics) and enforce best practices in data governance, model deployment, and cloud optimization. Lead discovery workshops, solution assessments, and architecture reviews during pre-sales and delivery cycles.
GCP & AdTech Tech Stack Expertise: BigQuery, Cloud Pub/Sub, Dataflow, Dataproc, Cloud Composer (Airflow), Vertex AI, AI Platform, AutoML, Cloud Functions, Cloud Run, Looker, Apigee, Dataplex, GKE. Deep understanding of programmatic advertising (RTB, OpenRTB), cookie-less identity frameworks, and AdTech/MarTech data flows. Experience integrating or building components like: Data Management Platforms (DMPs), Customer Data Platforms (CDPs), Demand-Side Platforms (DSPs), ad servers, attribution engines, and real-time bidding pipelines. Event-driven and microservices architecture using APIs, streaming pipelines, and edge delivery networks. Integration with platforms like Google Marketing Platform, Google Ads Data Hub, Snowplow, Segment, or similar. Strong understanding of IAM, data encryption, PII anonymization, and regulatory compliance (GDPR, CCPA, HIPAA if applicable).
Experience with CI/CD pipelines (Cloud Build), Infrastructure as Code (Terraform), and MLOps pipelines using Vertex AI or Kubeflow. Strong experience in Python and SQL; familiarity with Scala or Java is a plus. Experience with version control (Git), Agile delivery, and architectural documentation tools.
Posted 4 days ago
5.0 - 9.0 years
0 Lacs
Maharashtra
On-site
As a Data Warehouse (DWH) professional with relevant experience in Google Cloud Platform (GCP), you will be responsible for developing and implementing robust data architectures. This includes designing data lakes, data warehouses, and data marts by utilizing GCP services such as BigQuery, Dataflow, Dataproc, and Cloud Storage. Your role will involve designing and implementing data models that meet business requirements while ensuring data integrity, consistency, and accessibility. Your deep understanding of GCP services and best practices for data warehousing, data analytics, and machine learning will be crucial in this role. You will also be tasked with planning and executing data migration strategies from on-premises or other cloud environments to GCP. Optimizing data pipelines and query performance to facilitate efficient data processing and analysis will be a key focus area. Additionally, your proven experience in managing teams and project delivery will be essential for success in this position. Collaborating closely with stakeholders to comprehend their requirements and deliver effective solutions will be a significant part of your responsibilities. Any experience with Looker will be considered advantageous for this role.
Posted 4 days ago
4.0 - 8.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
At TELUS Digital, you will play a crucial role in enabling customer experience innovation by fostering spirited teamwork, embracing agile thinking, and embodying a caring culture that prioritizes customers. As the global arm of TELUS Corporation, a leading telecommunications service provider in Canada, we specialize in delivering contact center and business process outsourcing solutions to major corporations across various sectors such as consumer electronics, finance, telecommunications, and utilities. With our extensive global call center capabilities, we offer secure infrastructure, competitive pricing, skilled resources, and exceptional customer service, all supported by TELUS, our multi-billion dollar parent company.
In this role, you will leverage your expertise in Data Engineering, backed by a minimum of 4 years of industry experience, to drive the success of our projects. Proficiency in Google Cloud Platform (GCP) services including Dataflow, BigQuery, Cloud Storage, and Pub/Sub is essential for effectively managing data pipelines and ETL processes. Your strong command over the Python programming language will be instrumental in performing data processing tasks efficiently. You will be responsible for optimizing data pipeline architectures, enhancing performance, and ensuring reliability through your software engineering skills. Your ability to troubleshoot and resolve complex pipeline issues, automate repetitive tasks, and monitor data pipelines for efficiency and reliability will be critical in maintaining operational excellence. Additionally, your familiarity with SQL, relational databases, and version control systems like Git will be beneficial in streamlining data management processes.
As part of the team, you will collaborate closely with stakeholders to analyze, test, and enhance the reliability of GCP data pipelines, Informatica ETL workflows, MDM, and Control-M jobs. Your commitment to continuous improvement, SLA adherence, and post-incident reviews will drive the evolution of our data pipeline systems. Excellent communication, problem-solving, and analytical skills are essential for effectively documenting processes, providing insights, and ensuring seamless operations. This role offers a dynamic environment where you will have the opportunity to work in a 24x7 shift, contributing to the success of our global operations and making a meaningful impact on customer experience.
Posted 4 days ago
9.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About McDonald’s: One of the world’s largest employers with locations in more than 100 countries, McDonald’s Corporation has corporate opportunities in Hyderabad. Our global offices serve as dynamic innovation and operations hubs, designed to expand McDonald's global talent base and in-house expertise. Our new office in Hyderabad will bring together knowledge across business, technology, analytics, and AI, accelerating our ability to deliver impactful solutions for the business and our customers across the globe. Position Summary: We are seeking an experienced Data Architect to design, implement, and optimize scalable data solutions on Amazon Web Services (AWS) and / or Google Cloud Platform (GCP). The ideal candidate will lead the development of enterprise-grade data architectures that support analytics, machine learning, and business intelligence initiatives while ensuring security, performance, and cost optimization. Who we are looking for: Primary Responsibilities: Key Responsibilities Architecture & Design: Design and implement comprehensive data architectures using AWS or GCP services Develop data models, schemas, and integration patterns for structured and unstructured data Create solution blueprints, technical documentation, architectural diagrams, and best practice guidelines Implement data governance frameworks and ensure compliance with security standards Design disaster recovery and business continuity strategies for data systems Technical Leadership: Lead cross-functional teams in implementing data solutions and migrations Provide technical guidance on cloud data services selection and optimization Collaborate with stakeholders to translate business requirements into technical solutions Drive adoption of cloud-native data technologies and modern data practices Platform Implementation: Implement data pipelines using cloud-native services (AWS Glue, Google Dataflow, etc.) 
Configure and optimize data lakes and data warehouses (S3/Redshift, GCS/BigQuery) Set up real-time streaming data processing solutions (Kafka, Airflow, Pub/Sub) Implement automated data quality monitoring and validation processes Establish CI/CD pipelines for data infrastructure deployment Performance & Optimization: Monitor and optimize data pipeline performance and cost efficiency Implement data partitioning, indexing, and compression strategies Conduct capacity planning and scaling recommendations Troubleshoot complex data processing issues and performance bottlenecks Establish monitoring, alerting, and logging for data systems Skills: Bachelor’s degree in Computer Science, Data Engineering, or related field 9+ years of experience in data architecture and engineering 5+ years of hands-on experience with AWS or GCP data services Experience with large-scale data processing and analytics platforms AWS Redshift, S3, Glue, EMR, Kinesis, Lambda AWS Data Pipeline, Step Functions, CloudFormation BigQuery, Cloud Storage, Dataflow, Dataproc, Pub/Sub GCP Cloud Functions, Cloud Composer, Deployment Manager IAM, VPC, and security configurations SQL and NoSQL databases Big data technologies (Spark, Hadoop, Kafka) Programming languages (Python, Java, SQL) Data modeling and ETL/ELT processes Infrastructure as Code (Terraform, CloudFormation) Container technologies (Docker, Kubernetes) Data warehousing concepts and dimensional modeling Experience with modern data architecture patterns Real-time and batch data processing architectures Data governance, lineage, and quality frameworks Business intelligence and visualization tools Machine learning pipeline integration Strong communication and presentation abilities Leadership and team collaboration skills Problem-solving and analytical thinking Customer-focused mindset with business acumen Preferred Qualifications: Master’s degree in relevant field Cloud certifications (AWS Solutions Architect, GCP Professional Data Engineer) Experience with multiple cloud platforms Knowledge of data privacy regulations (GDPR, CCPA) Work location: Hyderabad, India Work pattern: Full time role. Work mode: Hybrid. Additional Information: McDonald’s is committed to providing qualified individuals with disabilities with reasonable accommodations to perform the essential functions of their jobs. McDonald’s provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to sex, sex stereotyping, pregnancy (including pregnancy, childbirth, and medical conditions related to pregnancy, childbirth, or breastfeeding), race, color, religion, ancestry or national origin, age, disability status, medical condition, marital status, sexual orientation, gender, gender identity, gender expression, transgender status, protected military or veteran status, citizenship status, genetic information, or any other characteristic protected by federal, state or local laws. This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation and training. McDonald’s Capability Center India Private Limited (“McDonald’s in India”) is a proud equal opportunity employer and is committed to hiring a diverse workforce and sustaining an inclusive culture.
At McDonald’s in India, employment decisions are based on merit, job requirements, and business needs, and all qualified candidates are considered for employment. McDonald’s in India does not discriminate based on race, religion, colour, age, gender, marital status, nationality, ethnic origin, sexual orientation, political affiliation, veteran status, disability status, medical history, parental status, genetic information, or any other basis protected under state or local laws. Nothing in this job posting or description should be construed as an offer or guarantee of employment.
Posted 4 days ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Overview: TekWissen is a global workforce management provider throughout India and many other countries in the world. The below client is a global company with shared ideals and a deep sense of family. From our earliest days as a pioneer of modern transportation, we have sought to make the world a better place – one that benefits lives, communities and the planet.
Job Title: Software Engineer Senior Location: Chennai Work Type: Hybrid
Position Description: As part of the client's DP&E Platform Observability team, you'll help build a top-tier monitoring platform focused on latency, traffic, errors, and saturation. You'll design, develop, and maintain a scalable, reliable platform, improving MTTR/MTTX, creating dashboards, and optimizing costs. Experience with large systems, monitoring tools (Prometheus, Grafana, etc.), and cloud platforms (AWS, Azure, GCP) is ideal. The focus is a centralized observability source for data-driven decisions and faster incident response.
Skills Required: Spring Boot, Angular, Cloud Computing
Skills Preferred: Google Cloud Platform - BigQuery, Dataflow, Dataproc, Data Fusion, Terraform, Tekton, Cloud SQL, Airflow, Postgres, PySpark, Python, API
Experience Required: 5+ years of overall experience with proficiency in Java, Angular or any JavaScript technology, with experience in designing and deploying cloud-based data pipelines and microservices using GCP tools like BigQuery, Dataflow, and Dataproc. Ability to leverage best-in-class data platform technologies (Apache Beam, Kafka, ...) to deliver platform features, and design & orchestrate platform services to deliver data platform capabilities. Service-Oriented Architecture and Microservices: Strong understanding of SOA, microservices, and their application within a cloud data platform context. Develop robust, scalable services using Java Spring Boot, Python, Angular, and GCP technologies. Full-Stack Development: Knowledge of front-end and back-end technologies, enabling collaboration on data access and visualization layers (e.g., React, Node.js). Design and develop RESTful APIs for seamless integration across platform services. Implement robust unit and functional tests to maintain high standards of test coverage and quality. Database Management: Experience with relational (e.g., PostgreSQL, MySQL) and NoSQL databases, as well as columnar databases like BigQuery. Data Governance and Security: Understanding of data governance frameworks and implementing RBAC, encryption, and data masking in cloud environments. CI/CD and Automation: Familiarity with CI/CD pipelines, Infrastructure as Code (IaC) tools like Terraform, and automation frameworks. Manage code changes with GitHub and troubleshoot and resolve application defects efficiently. Ensure adherence to SDLC best practices, independently managing feature design, coding, testing, and production releases. Problem-Solving: Strong analytical skills with the ability to troubleshoot complex data platform and microservices issues.
Experience Preferred: GCP Data Engineer, GCP Professional Cloud
Education Required: Bachelor's Degree
TekWissen® Group is an equal opportunity employer supporting workforce diversity.
Posted 4 days ago
10.0 years
0 Lacs
India
On-site
The Impact - The Director, Data Engineering will lead the development and implementation of a comprehensive data strategy that aligns with the organization’s business goals and enables data-driven decision making.
You will: Build and manage a team of talented data managers and engineers with the ability to not only keep up with, but also pioneer, in this space. Collaborate with and influence leadership to directly impact company strategy and direction. Develop new techniques and data pipelines that will enable various insights for internal and external customers. Develop deep partnerships with client implementation teams, engineering and product teams to deliver on major cross-functional measurements and testing. Communicate effectively to all levels of the organization, including executives. Provide success in partnering teams with dramatically varying backgrounds, from the highly technical to the highly creative. Design a data engineering roadmap and execute the vision behind it. Hire, lead, and mentor a world-class data team. Partner with other business areas to co-author and co-drive strategies on our shared roadmap. Oversee the movement of large amounts of data into our data lake. Establish a customer-centric approach and synthesize customer needs. Own end-to-end pipelines and destinations for the transfer and storage of all data. Manage 3rd-party resources and critical data integration vendors. Promote a culture that drives autonomy, responsibility, perfection and mastery. Maintain and optimize software and cloud expenses to meet the financial goals of the company. Provide technical leadership to the team in design and architecture of data products and drive change across process, practices, and technology within the organization. Work with engineering managers and functional leads to set direction and ambitious goals for the Engineering department. Ensure data quality, security, and accessibility across the organization.
About you: 10+ years of experience in data engineering. 5+ years of experience leading data teams of 30+ resources, including selection of talent and planning/allocating resources across multiple geographies and functions. 5+ years of experience with GCP tools and technologies, specifically Google BigQuery, Google Cloud Composer, Dataflow, Dataform, etc. Experience creating large-scale data engineering pipelines, data-based decision-making and quantitative analysis tools and software. Hands-on experience with code version control systems (Git). Experience with CI/CD, data architectures, pipelines, quality, and code management. Experience with complex, high-volume, multi-dimensional data based on unstructured, structured, and streaming datasets. Experience with SQL and NoSQL databases. Experience creating, testing, and supporting production software and systems. Proven track record of identifying and resolving performance bottlenecks for production systems. Experience designing and developing data lake, data warehouse, ETL and task-orchestration systems. Strong leadership, communication, time management and interpersonal skills. Proven architectural skills in data engineering. Experience leading teams developing production-grade data pipelines on large datasets. Experience designing a large data lake and lakehouse, managing data flows that integrate information from various sources into a common pool, and implementing data pipelines based on the ETL model. Experience with common data languages (e.g. Python, Scala) and data warehouses (e.g. Redshift, BigQuery, Snowflake, Databricks). Extensive experience with cloud tools and technologies, GCP preferred. Experience managing real-time data pipelines. Successful track record and demonstrated thought leadership, cross-functional influence and partnership within an agile/waterfall development environment. Experience in regulated industries or with compliance frameworks (e.g., SOC 2, ISO 27001).
Write to sanish@careerxperts.com to get connected!
Posted 4 days ago
5.0 years
0 Lacs
India
Remote
KLDiscovery, a leading global provider of electronic discovery, information governance and data recovery services, is currently seeking a Senior Software Engineer (C++ & C#) for an exciting new opportunity. The position will assist in review and analysis of applications, product development, and enhancements including documentation, code development, and unit testing of releases while adhering to KLDiscovery development processes and workflows with supervision and direction from lead developers and superiors. If you like working in a creative, technology-driven, high energy, collaborative, casual environment, and you have strong software development abilities, this is the opportunity for you! Hybrid or remote, work-from-home opportunity.
Responsibilities
Create, validate and review program code per specifications. Develop automated unit and API tests. Support bug fixes and implement enhancements to applications in production. Create, design and review software documentation. Utilize, communicate, and enforce coding standards. Provide technical support to applications in production within the defined SLA. Adhere to development processes and workflows. Assist and mentor the team, demonstrating technical excellence. Detect problems and areas that need improvement early and raise issues.
Qualifications
Fluent English (C1). At least 5 years of commercial, hands-on software development experience in C#/.NET and C++. Experience with ASP.NET Core Blazor. Experience with Entity Framework Core. Experience with desktop applications (WinForms preferred). Experience with background jobs and workers (e.g. Hangfire). Experience with Angular is a plus. Creating dataflow/sequence/C4 diagrams. Good understanding of at least one of the architectural/design patterns: MVC/MVP/MVVM/Clean/Screaming/Hexagonal architectures. .NET memory model and performance optimization solutions. Writing functional tests. Writing structure tests. Understanding modularity and vertical slices. Data privacy and securing desktop apps. Ability to design functionalities based on requirements.
Our Cultural Values
Entrepreneurs at heart, we are a customer-first team sharing one goal and one vision. We seek team members who are: Humble - no one is above another; we all work together to meet our clients’ needs and we acknowledge our own weaknesses. Hungry - we all are driven internally to be successful and to continually expand our contribution and impact. Smart - we use emotional intelligence when working with one another and with clients. Our culture shapes our actions, our products, and the relationships we forge with our customers.
Who We Are
KLDiscovery provides technology-enabled services and software to help law firms, corporations, government agencies and consumers solve complex data challenges. The company, with offices in 26 locations across 17 countries, is a global leader in delivering best-in-class eDiscovery, information governance and data recovery solutions to support the litigation, regulatory compliance, internal investigation and data recovery and management needs of our clients. Serving clients for over 30 years, KLDiscovery offers data collection and forensic investigation, early case assessment, electronic discovery and data processing, application software and data hosting for web-based document reviews, and managed document review services. In addition, through its global Ontrack Data Recovery business, KLDiscovery delivers world-class data recovery, email extraction and restoration, data destruction and tape management.
KLDiscovery has been recognized as one of the fastest growing companies in North America by both Inc. Magazine (Inc. 5000) and Deloitte (Deloitte’s Technology Fast 500). Additionally, KLDiscovery is an Orange-level Relativity Best in Service Partner, a Relativity Premium Hosting Partner and maintains ISO/IEC 27001 Certified data centers. KLDiscovery is an Equal Opportunity Employer.
Posted 5 days ago
0 years
0 Lacs
Ballabgarh, Haryana, India
On-site
Data Engineer Intern (LOB25-STA-07)
Role: Data Engineer. Contract: 6-month internship. Experience: Less than 1 year. Work location: Paris / Paris region.
About the role
The internship is part of the rollout of additional building blocks within a reporting and analysis solution for a services group serving major industrial players. The project involves implementing the full data value chain for a functional domain: database modeling, and the design and development of the data loads and of the reporting and analysis spaces. You will join a team of 2 experienced consultants and a project manager. You will own a delivery scope, developing and qualifying the deliverables using the full Microsoft AZURE data chain (Data Lake Storage / File Storage / SQL Server / DataFactory / DataFlow / AZURE Functions) as well as data visualization tools (POWER BI). This assignment will let you build skills in the application architecture of a business intelligence system, the implementation of a BI solution on Microsoft AZURE, and the creation of data visualization modules in POWER BI.
Role description – planned work
Learning the methodology for delivering a business intelligence project, in particular along the following dimensions: technology and infrastructure constraints required to reach the targeted performance; database modeling and the breakdown of the loading and administration processes within a BI system. Hands-on practice through participation in specification and implementation work on the loading and reporting functions (technical specifications, development and testing, client acceptance and production release). Starting from the functional specifications, you will produce the technical specifications and develop the components of the solution: the data model; database loading; POWER BI analysis spaces and dashboards; unit and then integration tests of the complete solution; monitoring and assisting the client during the solution validation process; updating and writing the user and administrator guides. You will benefit from all of LOBELLIA Conseil's expertise in the management and methodology of building business intelligence solutions.
This internship will allow you to acquire: an architectural view of a state-of-the-art business intelligence system; an understanding of the specifics of the approach to, and management of, a BI project; solid technical skills through the expertise of the team's consultants and your own work; a first experience owning work across the entire life cycle of the solution.
Technical environment: Relational databases: SQL Server. Microsoft AZURE data stack. Reporting and analysis solution: Power BI.
Desired profile: Final-year engineering school student or scientific Master's (M2) student. Required skills: DBMS, SQL; programming techniques; a first exposure to business intelligence. Required qualities: combined technical and functional interest, writing skills, an analytical mind, rigor, a service-oriented mindset, good interpersonal skills.
Posted 5 days ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
About Us: Transcloud is a cloud technology services company that helps businesses adopt the cloud to empower them for the future.
Job Description: We are seeking a skilled and experienced Cloud Engineer to join our dynamic team. As a Cloud Engineer, you will play a crucial role in implementing and managing cloud architectures for our clients' software applications. Your strong expertise in Google Cloud Platform (GCP) implementations, programming languages, and cloud ecosystem design will contribute to the success of our cloud-based solutions. We offer a highly competitive salary commensurate with industry standards.
Minimum Qualifications: - Demonstrated experience in implementing cloud architecture for software applications. - Extensive expertise in GCP implementations. - Proficiency in at least one programming or scripting language. - Proficient in using Linux CLI commands and the Google Cloud SDK. - Ability to design holistic cloud ecosystems with a focus on Google Cloud Platform capabilities and features. - Familiarity with Cloud Shell and GCP commands such as gcloud and gsutil. - Hands-on experience with Kubernetes, DevOps, and developing and managing CI/CD pipelines. - Hands-on experience with GCP IaaS services such as GCE, GAE, GKE, VPC, DNS, Interconnect, VPN, CDN, Cloud Storage, Filestore, Firebase, Deployment Manager, and Stackdriver. - Familiarity with GCP services including Cloud Endpoints, Dataflow, Dataproc, Datalab, Dataprep, Cloud Composer, Pub/Sub, and Cloud Functions.
Responsibilities: - Troubleshoot issues, actively seeking out problems and providing effective solutions. - Implement HA and DR solutions. - Be an active participant in the running of the team, fostering a great place to work. - Engage with the wider business to identify opportunities for future work for the team. - Experiment with new technologies to help push the boundaries of what the team is building.
Requirements: - Professional certifications related to cloud platforms, specifically Google Cloud Platform. - Knowledge of containerization technologies (e.g., Docker, Kubernetes). - Familiarity with DevOps practices and tools. - Understanding of basic network and security principles in cloud environments. - Experience with automation and infrastructure-as-code tools, preferably Terraform. - Experience with other cloud platforms such as AWS or Azure is a plus. - Excellent problem-solving and analytical skills. - Strong communication and collaboration abilities.
Benefits: ✔ Health insurance for a worry-free lifestyle. ✔ Flexible work hours for better work-life balance. ✔ Informal dress code to express your individuality. ✔ A 5-day work week so you can pursue your passions outside of work. ✔ The opportunity to work directly with clients and grow your career.
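As a flavour of the day-to-day GCP work described above, here is a minimal sketch using the Google Cloud Storage Python client. The project ID, bucket and object names are placeholders, and the same operations could equally be done with gcloud/gsutil from Cloud Shell.

```python
from google.cloud import storage  # pip install google-cloud-storage

# Placeholder project and bucket names; authentication is assumed to come from
# Application Default Credentials (e.g. `gcloud auth application-default login`).
PROJECT_ID = "my-gcp-project"
BUCKET_NAME = "my-app-artifacts"

def upload_release_artifact(local_path: str, object_name: str) -> None:
    client = storage.Client(project=PROJECT_ID)

    # List existing buckets to confirm access, roughly what `gsutil ls` shows.
    for bucket in client.list_buckets():
        print("bucket:", bucket.name)

    # Upload a build artifact, the client-library equivalent of `gsutil cp`.
    blob = client.bucket(BUCKET_NAME).blob(object_name)
    blob.upload_from_filename(local_path)
    print(f"uploaded gs://{BUCKET_NAME}/{object_name}")

if __name__ == "__main__":
    upload_release_artifact("dist/app.tar.gz", "releases/app.tar.gz")
```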
Posted 5 days ago
0.0 years
5 - 12 Lacs
Chennai, Tamil Nadu
On-site
About Us: Transcloud is a cloud technology services company that helps businesses adopt the cloud to empower them for the future. We are seeking a skilled and experienced Cloud Engineer to join our dynamic team. As a Cloud Engineer, you will play a crucial role in implementing and managing cloud architectures for our clients' software applications. Your strong expertise in Google Cloud Platform (GCP) implementations, programming languages, and cloud ecosystem design will contribute to the success of our cloud-based solutions. We offer a highly competitive salary commensurate with industry standards.
Minimum Qualifications: - Demonstrated experience in implementing cloud architecture for software applications. - Extensive expertise in GCP implementations. - Proficiency in at least one programming or scripting language. - Proficient in using Linux CLI commands and the Google Cloud SDK. - Ability to design holistic cloud ecosystems with a focus on Google Cloud Platform capabilities and features. - Familiarity with Cloud Shell and GCP commands such as gcloud and gsutil. - Hands-on experience with Kubernetes, DevOps, and developing and managing CI/CD pipelines. - Hands-on experience with GCP IaaS services such as GCE, GAE, GKE, VPC, DNS, Interconnect, VPN, CDN, Cloud Storage, Filestore, Firebase, Deployment Manager, and Stackdriver. - Familiarity with GCP services including Cloud Endpoints, Dataflow, Dataproc, Datalab, Dataprep, Cloud Composer, Pub/Sub, and Cloud Functions.
Responsibilities: - Troubleshoot issues, actively seeking out problems and providing effective solutions. - Implement HA and DR solutions. - Be an active participant in the running of the team, fostering a great place to work. - Engage with the wider business to identify opportunities for future work for the team. - Experiment with new technologies to help push the boundaries of what the team is building.
Requirements: - Professional certifications related to cloud platforms, specifically Google Cloud Platform. - Knowledge of containerization technologies (e.g., Docker, Kubernetes). - Familiarity with DevOps practices and tools. - Understanding of basic network and security principles in cloud environments. - Experience with automation and infrastructure-as-code tools, preferably Terraform. - Experience with other cloud platforms such as AWS or Azure is a plus. - Excellent problem-solving and analytical skills. - Strong communication and collaboration abilities.
Benefits: ✔ Health insurance for a worry-free lifestyle. ✔ Flexible work hours for better work-life balance. ✔ Informal dress code to express your individuality. ✔ A 5-day work week so you can pursue your passions outside of work. ✔ The opportunity to work directly with clients and grow your career.
Job Types: Full-time, Permanent Pay: ₹500,000.00 - ₹1,200,000.00 per year Schedule: Day shift Location: Chennai, Tamil Nadu (Required) Work Location: In person Speak with the employer: +91 9361930728
Posted 5 days ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Hi, we are hiring for a Java Developer. 5+ years of experience as a Java developer with expertise in distributed systems and data processing pipelines. Strong understanding of Google Cloud Dataflow, Apache Beam, Kafka, Splunk, and related technologies. Proficiency in the Java programming language and familiarity with relevant frameworks such as Spring Boot or Hibernate. Experience working with big data platforms and solving large-scale data processing challenges. Looking for an immediate joiner.
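The role centres on Java, but as a quick illustration of the Dataflow/Apache Beam programming model the posting refers to, here is a minimal sketch using the Beam Python SDK (the Java SDK follows the same PCollection/PTransform structure). The sample data and step names are made up.

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def run() -> None:
    # Runs locally on the DirectRunner by default; pass --runner=DataflowRunner
    # plus project, region and temp_location options to execute on Cloud Dataflow.
    options = PipelineOptions()
    with beam.Pipeline(options=options) as p:
        (
            p
            | "ReadEvents" >> beam.Create(["user1 login", "user2 login", "user1 purchase"])
            | "ExtractUser" >> beam.Map(lambda line: line.split()[0])
            | "CountPerUser" >> beam.combiners.Count.PerElement()
            | "Format" >> beam.MapTuple(lambda user, n: f"{user}: {n}")
            | "Print" >> beam.Map(print)
        )

if __name__ == "__main__":
    run()
```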
Posted 5 days ago
15.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About the organisation DataFlow Group, founded in 2007, is a global leader in Primary Source Verification (PSV), background screening, and immigration compliance solutions. The business works with a range of global public and private sector organisations to mitigate risk by validating credentials and detecting fraudulent documents, safeguarding communities and organisations worldwide. With over 160,000 issuing authorities across more than 200 countries, DataFlow is at the forefront of trust and transparency in talent verification. The mission is simple: empower talent to navigate careers without borders, with trust and transparency. To learn more about DataFlow Group, please visit: https://www.dataflowgroup.com/.
Role context and summary: DataFlow Group is seeking a highly skilled and results-driven Quality Assurance Director to lead the end-to-end quality assurance function for the rollout and ongoing delivery of our new Apex Platform, a mission-critical system that supports primary source verification for professional credentials, licences and work experience. This role is instrumental in designing, implementing and governing the test strategy, quality assurance processes, and associated tooling that ensure the platform meets the highest standards of functionality, usability, performance and scalability. The ideal candidate will have deep experience in leading quality assurance and test teams, driving test automation, and building the testing components of CI/CD pipelines to support fast, iterative and high-quality delivery.
Key responsibilities
1. Test Strategy & Governance: Define and own the overall end-to-end quality assurance strategy, covering functional, non-functional, integration, regression, team capability, tooling strategy and test KPIs to measure the effectiveness of the test strategy. Develop and implement a test governance framework to ensure test traceability, coverage and quality control.
2. Tooling Strategy: Define and implement a test tool strategy, selecting, configuring and managing the test tools and frameworks (e.g. Selenium, Playwright, Cypress, Postman, JMeter, GitLab, SonarQube, Jenkins) and DevOps tooling so that they work together seamlessly in the CI pipeline.
3. CI/CD: Design and implement the testing architecture within the CI/CD pipeline to support automated build, test and deployment cycles. Work closely with the engineering team to integrate automated tests (unit, API, UI, functional, and regression) into the CI/CD workflows. Establish shift-left testing practices, enabling earlier detection of defects in the SDLC. Build reusable test libraries and test automation suites to accelerate regression and release testing.
4. Team Leadership & Collaboration: Lead a cross-functional team of test engineers, automation specialists and manual testers. Foster a culture of quality, continuous testing, and proactive risk identification. Work closely with Product Engineering and Business Operations to align on priorities and milestones.
5. Performance & Scalability: Develop and refine the platform volumetrics and manage the benchmarking activities to establish a baseline. Plan and execute performance testing aligned to volumetric benchmarks, SLAs and peak scenarios, managing an external vendor for this exercise. Validate platform stability and scalability through repeatable test cycles and proactive risk identification. Provide assurance on platform readiness for client migrations and high-volume activity.
6. Operational Excellence: Develop and maintain test metrics and reporting dashboards to inform stakeholders of quality status, test progress and defect trends.
Essential requirements and qualifications: Minimum of 15+ years in the software development industry, with 3+ years as a Test/QA Manager. Proven experience designing and running test strategies for complex platform rollouts. Deep knowledge of QA methodologies, Agile delivery and DevOps practices. Experience in building and maintaining automated test pipelines in AWS CI/CD environments. Hands-on experience with tools such as Selenium, Cypress, Playwright, JMeter, GitLab, and Jenkins. Familiarity with working in a hyperscaler environment such as AWS, GCP or Azure. Ability to manage test planning, defect triage, and test sign-off for large-scale programs. Strong stakeholder communication and leadership skills. Experience with API testing, microservices, and data migrations. ISTQB or other formal testing certifications.
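As an illustration of the shift-left, CI-integrated automated testing this role would govern, here is a minimal pytest-style API smoke test sketch. The endpoint URL and expected status codes are assumptions for illustration, not details of the actual Apex Platform.

```python
import requests

BASE_URL = "https://apex.example.com/api"  # hypothetical endpoint for illustration

def test_health_endpoint_returns_ok():
    # Shift-left style smoke check that can run in the CI pipeline on every build.
    resp = requests.get(f"{BASE_URL}/health", timeout=5)
    assert resp.status_code == 200

def test_verification_request_is_validated():
    # A malformed payload should be rejected before it reaches downstream services.
    resp = requests.post(f"{BASE_URL}/verifications", json={}, timeout=5)
    assert resp.status_code in (400, 422)
```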
Posted 5 days ago
0 years
0 Lacs
Mumbai Metropolitan Region
Remote
Role : Database Engineer Location : Remote Notice Period : 30 Days Skills And Experience Bachelor's degree in Computer Science, Information Systems, or a related field is desirable but not essential. Experience with data warehousing concepts and tools (e.g., Snowflake, Redshift) to support advanced analytics and reporting, aligning with the team’s data presentation goals. Skills in working with APIs for data ingestion or connecting third-party systems, which could streamline data acquisition processes. Proficiency with tools like Prometheus, Grafana, or ELK Stack for real-time database monitoring and health checks beyond basic troubleshooting. Familiarity with continuous integration/continuous deployment (CI/CD) tools (e.g., Jenkins, GitHub Actions). Deeper expertise in cloud platforms (e.g., AWS Lambda, GCP Dataflow) for serverless data processing or orchestration. Knowledge of database development and administration concepts, especially with relational databases like PostgreSQL and MySQL. Knowledge of Python programming, including data manipulation, automation, and object-oriented programming (OOP), with experience in modules such as Pandas, SQLAlchemy, gspread, PyDrive, and PySpark. Knowledge of SQL and understanding of database design principles, normalization, and indexing. Knowledge of data migration, ETL (Extract, Transform, Load) processes, or integrating data from various sources. Knowledge of cloud-based databases, such as AWS RDS and Google BigQuery. Eagerness to develop import workflows and scripts to automate data import processes. Knowledge of data security best practices, including access controls, encryption, and compliance standards. Strong problem-solving and analytical skills with attention to detail. Creative and critical thinking. Strong willingness to learn and expand knowledge in data engineering. Familiarity with Agile development methodologies is a plus. Experience with version control systems, such as Git, for collaborative development. Ability to thrive in a fast-paced environment with rapidly changing priorities. Ability to work collaboratively in a team environment. Good and effective communication skills. Comfortable with autonomy and ability to work independently.
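To make the API-to-warehouse workflow in this listing concrete, here is a minimal sketch of an extract-transform-load script using Python, Pandas and SQLAlchemy. The source API, connection string and table names are placeholders.

```python
import requests
import pandas as pd
from sqlalchemy import create_engine

# Hypothetical source API and PostgreSQL connection string for illustration.
SOURCE_URL = "https://api.example.com/v1/customers"
PG_URL = "postgresql+psycopg2://etl_user:password@localhost:5432/analytics"

def run_daily_customer_load() -> None:
    # Extract: pull records from a third-party API.
    records = requests.get(SOURCE_URL, timeout=30).json()

    # Transform: normalise into a DataFrame and drop obvious bad rows.
    df = pd.DataFrame(records)
    df = df.dropna(subset=["customer_id"]).drop_duplicates(subset=["customer_id"])

    # Load: append into a staging table for downstream modelling.
    engine = create_engine(PG_URL)
    df.to_sql("stg_customers", engine, if_exists="append", index=False)

if __name__ == "__main__":
    run_daily_customer_load()
```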
Posted 5 days ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Title: Data Engineer – GCP Location: On-Site (Hyderabad / Bangalore) Experience: 5+ Years Employment Type: Full-Time Job Overview: We are seeking a highly skilled and motivated Data Engineer with a strong background in Google Cloud Platform (GCP) and data processing frameworks. The ideal candidate will have hands-on experience in building and optimizing data pipelines, architectures, and data sets using GCP services like BigQuery, Dataflow, Pub/Sub, GCS, and Cloud Composer. Key Responsibilities: Design, build, and maintain scalable and efficient data pipelines on GCP. Implement data ingestion, transformation, and orchestration using GCP services: BigQuery, Dataflow, Pub/Sub, GCS, and Cloud Composer . Write complex and optimized SQL queries for data transformation and analytics. Develop Python scripts for custom transformations and pipeline logic. Orchestrate workflows using Apache Airflow (Cloud Composer). Collaborate with DevOps to implement CI/CD pipelines using Jenkins , GitLab , and Terraform . Ensure data quality, governance, and reconciliation across systems. Required Skills: GCP Expertise : BigQuery, Dataflow, Pub/Sub, GCS, Cloud Composer Languages : Advanced SQL, Python (strong scripting and data transformation experience) DevOps & IaC : Basic experience with Terraform, Jenkins, and GitLab Data Orchestration : Strong experience with Apache Airflow Nice-to-Have Skills: Containerization & Cluster Management : GKE (Google Kubernetes Engine) Big Data Ecosystem : Bigtable, Kafka, Hadoop CDC/Data Sync : Oracle GoldenGate Distributed Processing : PySpark Data Auditing : Data Reconciliation frameworks or strategies
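For orientation on the Cloud Composer / BigQuery orchestration described above, here is a minimal, hypothetical Airflow DAG that runs one aggregation query in BigQuery each day. The project, dataset and table names are placeholders, and real pipelines would add ingestion, data quality checks and alerting around this step.

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

# Project, dataset and table names below are placeholders for illustration.
with DAG(
    dag_id="daily_orders_transform",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    tags=["gcp", "bigquery"],
) as dag:
    aggregate_orders = BigQueryInsertJobOperator(
        task_id="aggregate_orders",
        configuration={
            "query": {
                "query": """
                    SELECT customer_id, DATE(order_ts) AS order_date, SUM(amount) AS total_amount
                    FROM `my-project.raw.orders`
                    GROUP BY customer_id, order_date
                """,
                "useLegacySql": False,
                "destinationTable": {
                    "projectId": "my-project",
                    "datasetId": "curated",
                    "tableId": "daily_order_totals",
                },
                "writeDisposition": "WRITE_TRUNCATE",
            }
        },
    )
```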