10.0 - 15.0 years
0 - 0 Lacs
Chennai
Work from Office
Who we are looking for:
We are seeking a skilled and motivated engineer to join our Data Infrastructure team. The team is responsible for the tools and backend infrastructure that support our data platform, optimizing it for performance, scalability, and reliability. The role requires strong experience in multi-cloud technologies, message bus systems, automated deployments of containerized applications, database design, development, management, and performance, SOX compliance requirements, and infrastructure automation through Terraform, continuous delivery, and batch-oriented workflows. As a Data Infrastructure Engineer at ACV Auctions, you will work alongside and mentor software and production engineers to solve ACV's most complex data and software problems. You will operate as a technical liaison on a high-performing team: balancing high-quality deliverables with customer focus, communicating clearly, mentoring and guiding engineers, and delivering results in a fast-paced environment.

What you will be doing:
Collaborate with cross-functional teams, including Data Scientists, Software Engineers, Data Engineers, and Data Analysts, to understand data requirements and translate them into technical specifications.
Influence company-wide engineering standards for databases, tooling, languages, and build systems.
Design, implement, and maintain scalable and high-performance data infrastructure solutions, with a primary focus on data.
Design, implement, and maintain tools and best practices for access control, data versioning, database management, and migration strategies, among other areas.
Contribute to, influence, and set standards for all technical aspects of a product or service, including coding, testing, debugging, performance, languages, database selection, management, and deployment.
Identify and troubleshoot database/system issues and bottlenecks, working closely with the engineering team to implement effective solutions.
Write clean, maintainable, well-commented code and automation to support our data infrastructure layer.
Perform code reviews, develop high-quality documentation, and build robust test suites for your products.
Provide technical support for databases, including troubleshooting, performance tuning, and resolving complex issues.
Collaborate with software and DevOps engineers to design scalable services, plan feature roll-outs, and ensure high reliability and performance of our products.
Collaborate with development and data science teams to design and optimize database schemas, queries, and stored procedures for maximum efficiency.
Participate in SOX audits, including the creation of standards and reproducible audit evidence through automation.
Create and maintain documentation for database and system configurations, procedures, and troubleshooting guides.
Maintain and extend (as required) existing database operations solutions for backups, index defragmentation, data retention, etc.
Respond to and troubleshoot highly complex problems quickly, efficiently, and effectively.
Accountable for the overall performance of products and/or services within a defined area of focus.
Be part of the on-call rotation.
Handle multiple competing priorities in an agile, fast-paced environment.
Perform additional duties as assigned.

What you need:
Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent work experience).
Ability to read, write, speak, and understand English.
Strong communication and collaboration skills, with the ability to work effectively in a fast-paced, global team environment.
1+ years of experience architecting, developing, and delivering software products, with an emphasis on the data infrastructure layer.
1+ years of experience with continuous integration and build tools.
1+ years of experience programming in Python.
1+ years of experience with cloud platforms, preferably GCP or AWS.
Knowledge of day-to-day tools and how they work, including deployments, Kubernetes, monitoring systems, and testing tools.
Knowledge of version control workflows, including trunk-based development, multiple release planning, cherry-picking, and rebasing.
Hands-on skills and the ability to drill deep into complex system design and implementation.
Experience with: DevOps practices and tools for database automation and infrastructure provisioning; programming in Python and SQL; GitHub and Jenkins; infrastructure-as-code tooling such as Terraform (preferred); big data technologies and distributed databases.

Nice to Have Qualifications:
Experience with NoSQL data stores.
Airflow, Docker, containers, Kubernetes, Datadog, Fivetran.
Database monitoring and diagnostic tools, preferably Datadog.
Database management/administration with PostgreSQL, MySQL, DynamoDB, MongoDB.
GCP/BigQuery, Confluent Kafka.
Using and integrating with cloud services, specifically AWS RDS, Aurora, S3, and GCP.
Service-oriented architecture/microservices and event sourcing in a platform like Kafka (preferred).
Familiarity with DevOps practices and tools for automation and infrastructure provisioning.
Hands-on experience with SOX compliance requirements.
Knowledge of data warehousing concepts and technologies, including dimensional modeling and ETL frameworks.
Knowledge of database design principles, data modeling, architecture, infrastructure, security principles, best practices, and performance tuning and optimization techniques.
#LI-AM1
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
You will be responsible for designing, developing, and optimizing data models within the Celonis Execution Management System (EMS). Your duties will include extracting, transforming, and loading (ETL) data from flat files and UDP into Celonis. It is essential to work closely with business stakeholders and data analysts to understand data requirements and ensure an accurate representation of business processes. Additionally, you will develop and optimize PQL (Process Query Language) queries for process mining. Collaboration with group data engineers, architects, and analysts is crucial to ensure high-quality data pipelines and scalable solutions. Data validation, cleansing, and transformation will also be part of your responsibilities to enhance data quality. Monitoring and troubleshooting data integration pipelines to ensure performance and reliability are key tasks. You will also provide guidance and best practices for data modeling in Celonis.

To qualify for this role, you should have a minimum of 5 years of experience in data engineering, data modeling, or related roles. Proficiency in SQL, ETL processes, and database management (e.g., PostgreSQL, Snowflake, BigQuery, or similar) is required. Experience working with large-scale datasets and optimizing data models for performance is essential. Your data management experience must cover the data lifecycle and critical functions such as data profiling, data modeling, data engineering, and data consumption products and services. Strong problem-solving skills are necessary, along with the ability to work in an agile, fast-paced environment. Excellent communication skills and demonstrated hands-on experience in communicating technical topics with non-technical audiences are expected. You should be able to collaborate effectively and manage the timely completion of assigned activities while working in a highly virtual team environment. Excellent collaboration skills to work with cross-functional teams will also be essential for this role.
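As a rough illustration of the validation, cleansing, and transformation duties this posting describes, here is a minimal Python sketch; pandas, the input file, and the column names are illustrative assumptions, not details from the role.

```python
# A minimal, hypothetical cleansing pass over a raw process event log,
# assuming a flat-file extract destined for a process-mining load.
import pandas as pd

def clean_event_log(path: str) -> pd.DataFrame:
    """Load a raw event log and apply basic quality checks."""
    df = pd.read_csv(path, parse_dates=["event_time"])

    # Drop exact duplicates and rows missing the keys process mining needs
    df = df.drop_duplicates()
    df = df.dropna(subset=["case_id", "activity", "event_time"])

    # Normalize activity names so activities group consistently
    df["activity"] = df["activity"].str.strip().str.title()

    # Keep events in order within each case
    df = df.sort_values(["case_id", "event_time"])
    return df

events = clean_event_log("raw_events.csv")  # hypothetical input file
print(f"{len(events)} clean events ready to load")
```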
Posted 1 week ago
5.0 - 10.0 years
15 - 20 Lacs
Madurai, Chennai
Work from Office
Dear Candidate, Greetings of the day!
I am Kantha, and I'm reaching out to you regarding an exciting opportunity with TechMango. You can connect with me on LinkedIn (https://www.linkedin.com/in/kantha-m-ashwin-186ba3244/) or by email: kanthasanmugam.m@techmango.net
Techmango Technology Services is a full-scale software development services company founded in 2014 with a strong focus on emerging technologies, with the primary objective of delivering strategic technology solutions that advance its business partners' goals. Driven by the mantra "Clients' Vision is our Mission," we aim to be a technologically advanced and widely trusted organization providing high-quality, cost-efficient services within long-term client relationships. We operate in the USA (Chicago, Atlanta), Dubai (UAE), and India (Bangalore, Chennai, Madurai, Trichy).

Job Title: GCP Data Engineer
Location: Madurai
Experience: 5+ Years
Notice Period: Immediate

Job Summary:
We are seeking a hands-on GCP Data Engineer with deep expertise in real-time streaming data architectures to help design, build, and optimize data pipelines in our Google Cloud Platform (GCP) environment. The ideal candidate will have strong architectural vision and be comfortable rolling up their sleeves to build scalable, low-latency streaming data pipelines using Pub/Sub, Dataflow (Apache Beam), and BigQuery.

Key Responsibilities:
Architect and implement end-to-end streaming data solutions on GCP using Pub/Sub, Dataflow, and BigQuery.
Design real-time ingestion, enrichment, and transformation pipelines for high-volume event data.
Work closely with stakeholders to understand data requirements and translate them into scalable designs.
Optimize streaming pipeline performance, latency, and throughput.
Build and manage orchestration workflows using Cloud Composer (Airflow).
Drive schema design, partitioning, and clustering strategies in BigQuery for both real-time and batch datasets.
Define SLAs, monitoring, logging, and alerting for streaming jobs using Cloud Monitoring, Error Reporting, and Stackdriver.
Apply sound data modeling practices.
Ensure robust security, encryption, and access controls across all data layers.
Collaborate with DevOps on CI/CD automation of data workflows using Terraform, Cloud Build, and Git.
Document streaming architecture, data lineage, and deployment runbooks.

Required Skills & Experience:
5+ years of experience in data engineering or architecture.
3+ years of hands-on GCP data engineering experience.
Strong expertise in Google Pub/Sub, Dataflow (Apache Beam), BigQuery (including streaming inserts), Cloud Composer (Airflow), and Cloud Storage (GCS).
Solid understanding of streaming design patterns, exactly-once delivery, and event-driven architecture.
Deep knowledge of SQL and NoSQL data modeling.
Hands-on experience with monitoring and performance tuning of streaming jobs.
Experience using Terraform or equivalent for infrastructure as code.
Familiarity with CI/CD pipelines for data workflows.
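For context on the Pub/Sub to Dataflow (Apache Beam) to BigQuery pattern this posting centers on, a minimal streaming sketch follows; the project, topic, table, and schema are hypothetical, and a production job would also configure the Dataflow runner, project, and region.

```python
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Streaming mode; a real Dataflow job would also set runner/project/region.
options = PipelineOptions(streaming=True)

with beam.Pipeline(options=options) as p:
    (
        p
        | "ReadEvents" >> beam.io.ReadFromPubSub(
            topic="projects/my-proj/topics/events")      # hypothetical topic
        | "Parse" >> beam.Map(lambda b: json.loads(b.decode("utf-8")))
        | "Enrich" >> beam.Map(lambda e: {
            "event_id": e["id"],                         # assumed payload field
            "source": "pubsub",
            "payload": json.dumps(e),
        })
        | "WriteToBQ" >> beam.io.WriteToBigQuery(
            "my-proj:analytics.events",                  # hypothetical table
            schema="event_id:STRING,source:STRING,payload:STRING",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
        )
    )
```

In streaming mode, WriteToBigQuery uses streaming inserts by default, which pairs with the exactly-once and low-latency concerns the posting raises.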
Posted 1 week ago
8.0 - 13.0 years
15 - 27 Lacs
Bengaluru
Hybrid
Job Description: We are seeking a visionary and experienced Senior Data Architect to lead the design and implementation of our enterprise-wide data architecture. The role requires a solid foundation in Java, Spring, and SQL, and strong knowledge of modern data platforms and cloud technologies like Azure Databricks, Snowflake, and BigQuery. You will be responsible for modernizing our data infrastructure, ensuring security and accessibility of data assets, and providing strategic direction to the data team.

Key Responsibilities:
Define and implement enterprise data architecture aligned with organizational goals.
Design and lead scalable, secure, and resilient data platforms for structured and unstructured data.
Architect cloud-native data solutions using tools like Databricks, Snowflake, Redshift, and BigQuery.
Lead design and integration of data lakes, warehouses, and ETL pipelines.
Collaborate with cross-functional teams and leadership to define data needs and deliver solutions.
Guide data engineers and analysts in best practices, modeling, and governance.
Drive initiatives around data quality, metadata, lineage, and master data management (MDM).
Ensure compliance with data privacy regulations (GDPR, HIPAA, CCPA).
Lead modernization/migration of legacy systems to modern cloud platforms.

Must-Have Skills:
Strong expertise in Java, Spring Framework, and SQL.
Experience with Azure Databricks or similar cloud data platforms.
Hands-on with Snowflake, BigQuery, Redshift, or Azure Synapse.
Deep understanding of data modeling tools like Erwin or ER/Studio.
Proven experience designing data platforms in hybrid/multi-cloud setups.
Strong background in ETL/ELT pipelines, data APIs, and integration.
Proficient in Python or similar languages used for data engineering.
Knowledge of DevOps and CI/CD processes in data pipelines.

Preferred Qualifications:
10+ years of experience in Data Architecture.
At least 3 years in a senior or lead role.
Familiarity with data governance, security policies, identity management, and RBAC.
Excellent leadership, communication, and stakeholder management skills.
Posted 1 week ago
4.0 - 8.0 years
12 - 22 Lacs
Hyderabad
Work from Office
Role & responsibilities
Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related technical field.
Minimum 4+ years of hands-on experience in data engineering, with strong expertise in data warehousing, pipeline development, and analytics on cloud platforms.
Expert-level experience in:
Google BigQuery for large-scale data warehousing and analytics.
Python for data processing, orchestration, and scripting.
SQL for data wrangling, transformation, and query optimization.
dbt for developing modular and maintainable data transformation layers.
Airflow / Cloud Composer for workflow orchestration and scheduling.
Proven experience building enterprise-grade ETL/ELT pipelines and scalable data architectures.
Strong understanding of data quality frameworks, validation techniques, and governance processes.
Proficiency in Agile methodologies (Scrum/Kanban) and managing IT backlogs in a collaborative, iterative environment.
Preferred experience with:
Tools like Ascend.io, Databricks, Fivetran, or Dataflow.
Data cataloging/governance tools (e.g., Collibra).
CI/CD tools, Git workflows, and infrastructure automation.
Real-time/event-driven data processing using Pub/Sub, Kafka, or similar platforms.
Strategic problem-solving skills and ability to architect innovative solutions.
Ability to adapt quickly to new technologies and lead adoption across teams.
Excellent communication skills and ability to influence cross-functional teams.
Be a "go-to" expert for data technologies and solutions.
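A hedged sketch of the Airflow / Cloud Composer orchestration this role calls for, using the BigQuery operator from the Google provider package; the DAG id, schedule, and table names are assumptions for illustration only.

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import (
    BigQueryInsertJobOperator,
)

with DAG(
    dag_id="daily_sales_transform",      # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                   # "schedule_interval" on older Airflow 2.x
    catchup=False,
) as dag:
    build_sales_mart = BigQueryInsertJobOperator(
        task_id="build_sales_mart",
        configuration={
            "query": {
                # Illustrative transformation; dataset and tables are made up
                "query": """
                    CREATE OR REPLACE TABLE analytics.sales_mart AS
                    SELECT order_date, SUM(amount) AS revenue
                    FROM raw.orders
                    GROUP BY order_date
                """,
                "useLegacySql": False,
            }
        },
    )
```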
Posted 1 week ago
5.0 - 10.0 years
15 - 30 Lacs
Bengaluru
Remote
Google Analytics Consultant
Location: Remote
Experience: 5-10 years
Focus: Owning digital analytics across Power BI, GA, and eCommerce performance
Key Skills: Google Analytics, Power BI, SQL, BigQuery, Python, JavaScript
Responsibilities:
Manage end-to-end analytics infrastructure
Build dashboards and analyze user behavior
Lead A/B testing and performance marketing integrations
Collaborate cross-functionally with product, marketing, and IT teams
Posted 1 week ago
10.0 - 15.0 years
20 Lacs
Chennai
Work from Office
Candidate Specification: Any Graduate, with a minimum of 10 years of relevant experience.
Job Description: Strong hands-on experience with Snowflake, Redshift, and BigQuery. Proficiency in Data Build Tool (dbt) and SQL-based data modeling and transformation. Solid understanding of data warehousing concepts, star/snowflake schemas, and performance optimization. Experience with modern ETL/ELT tools and cloud-based data pipeline frameworks. Familiarity with version control systems (e.g., Git) and CI/CD practices for data workflows. Strong problem-solving skills and attention to detail. Should have excellent interpersonal skills.
Contact Person: Deepikad
Email ID: deepikad@gojobs.biz
Posted 1 week ago
13.0 - 20.0 years
35 - 70 Lacs
Bengaluru, Mumbai (All Areas)
Work from Office
Required Skills and Experience
13+ years of experience is a must, with 7+ years of relevant experience working on Big Data Platform technologies.
Proven technical skills across Cloudera, Teradata, Databricks, MS Data Fabric, Apache Hadoop, BigQuery, and AWS Big Data solutions (EMR, Redshift, Kinesis, Qlik).
Good domain experience in the BFSI or Manufacturing area.
Excellent communication skills to engage with clients and influence decisions.
High level of competence in preparing architectural documentation and presentations.
Must be organized and self-sufficient, and able to manage multiple initiatives simultaneously.
Must have the ability to coordinate with other teams independently.
Work with both internal and external stakeholders to identify business requirements, develop solutions to meet those requirements, and build the opportunity.
Note: If you have experience in the BFSI domain, the location will be Mumbai only. If you have experience in the Manufacturing domain, the location will be Mumbai or Bangalore only.
Interested candidates can share their updated resumes at shradha.madali@sdnaglobal.com
Posted 1 week ago
15.0 - 22.0 years
35 - 70 Lacs
Hyderabad, Bengaluru, Delhi
Work from Office
Job Role:
We are looking for Solution Architects to design data management solutions, with strong knowledge of architecting and designing highly available and scalable databases on the cloud. The architect will deliver hands-on, business-oriented strategic and technical consulting on requirements for cloud-native and marketplace data/database management architecture and solutions.

Key Responsibilities:
• Designing PaaS and IaaS database technology (RDBMS, NoSQL, distributed databases)
• Designing cloud infrastructure services (compute, storage, network, etc.) for DB deployment
• Designing database authentication and authorization (IAM, RBAC) solutions
• Capacity planning, performance analysis, and database optimization to manage DB workload
• Analyzing and identifying infrastructure requirements on premises and on other cloud environments such as Azure, Google, and AWS
• Designing high availability and disaster recovery solutions for database deployment on IaaS and PaaS platforms
• Designing database backup and recovery solutions using native or enterprise backup tooling
• Designing database/data management and optimization job/task automation
• Designing homogeneous and heterogeneous database migration solutions within on-premises environments, or from on-premises to cloud (IaaS and PaaS)
• Designing database monitoring, alert notification/reporting, and data masking/encryption solutions
• Designing ETL/ELT solutions for data ingestion and data transformation
• Mentoring implementation teams, handholding where needed on best practices, and making sure the solution is implemented the right way
• Preparing high-level and low-level design documents as required for the implementation team
• Database technologies and DB services: Azure SQL, Azure SQL MI, PostgreSQL, MySQL, Oracle, SQL Server, AWS RDS, Amazon Aurora, Cloud SQL, Cloud Spanner, Cosmos DB, Azure Synapse Analytics / Google BigQuery / Amazon Redshift

Educational Qualifications: Bachelor's degree in Engineering / Computer Science, Computer Engineering, or Information Technology (B.Tech / BE / M.Tech / MCA), full time.

What are the nature and scope of responsibilities the candidate should have handled?
• Understand the customer's overall data estate, business principles, and operations, and discover/assess database workloads
• Design complex, highly available, distributed, failsafe cloud-managed and unmanaged databases
• Prepare HLD and LLD documents
• Evaluate and recommend the cloud-managed database services best suited to customer needs for an optimal solution
• Drive cloud-managed database technology initiatives end to end and across multiple layers of architecture
• Provide strong technical leadership in adopting and contributing to open-source, cloud-managed, and cloud-unmanaged database technologies

Knowledge & Skills:
• Understanding of public/private/hybrid cloud solutions and database services on the cloud
• Extensive experience in conducting cloud readiness assessments for database environments, covering business and technical perspectives
• Knowledge of cloud best practices and guidelines for database deployment
• Knowledge of cloud-native HA-DR and database backup solutions
• Experience with and strong knowledge of Azure/GCP reference architectures
• Azure/GCP certified architect (preferred)
• Good oral and written communication
• Ability to work on a distributed and multicultural team
• Good understanding of ITSM processes and related tools
• Willingness to learn and explore new technologies
Posted 1 week ago
5.0 - 10.0 years
4 - 9 Lacs
Chennai, Bengaluru
Work from Office
Dear Candidate,
This is with reference to your profile on the job portal. Deloitte India Consulting has an immediate requirement for the following role.
Notice period: Immediate to 4 weeks (max)
Location: Any
Skill: GCP Data Engineer
If you are interested, please share your updated resume along with the following details (mandatory) to Smouni@deloitte.com:
Candidate Name:
Mobile No.:
Email ID:
Skill:
Total Experience:
Education Details:
Current Location:
Requested Location:
Current Firm:
Current CTC:
Expected CTC:
Notice Period/LWD:
Feedback:
Posted 1 week ago
15.0 - 20.0 years
5 - 9 Lacs
Mumbai
Work from Office
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: Data Engineering
Good to have skills: NA
Minimum 2 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with team members to understand project needs, developing application features, and ensuring that the applications function seamlessly within the business environment. You will also engage in testing and troubleshooting to enhance application performance and user experience, while continuously seeking ways to improve processes and deliver high-quality solutions.

Roles & Responsibilities:
Design, build, and maintain scalable ETL/ELT pipelines for structured and unstructured data.
Monitor and analyze key performance metrics (e.g., CTR, CPC, ROAS) to support business objectives.
Implement real-time data workflows with anomaly detection and performance reporting.
Develop and maintain data infrastructure using tools such as Spark, Hadoop, Kafka, and Airflow.
Collaborate with DevOps teams to deploy data solutions in containerized environments (Docker, Kubernetes).
Partner with data scientists to prepare, cleanse, and transform data for modeling.
Support the development of predictive models using tools like BigQuery ML and Scikit-learn.
Work closely with stakeholders across product, design, and executive teams to understand data needs.
Ensure compliance with data governance, privacy, and security standards.

Professional & Technical Skills:
1-2 years of experience in data engineering or a similar role.
Familiarity with cloud platforms (AWS, GCP, or Azure) and big data tools (Hive, HBase, Spark).
Familiarity with DevOps practices and CI/CD pipelines.

Additional Information:
This position is based at our Mumbai office.
Master's degree in Computer Science, Engineering, or a related field.
Qualification: 15 years full time education
Posted 1 week ago
5.0 - 10.0 years
18 - 22 Lacs
Bengaluru
Work from Office
Job Title: Data & AI Telecom Analytics
Level: 9 - Consultant, S&C Global Network CMT
Location: Bangalore/Gurgaon/Hyderabad/Mumbai
Must have skills: Data Science, Predictive Analytics, Model Building, Model Production, Data Analysis, Data Visualization, Storytelling, Communication, Teamwork
Good to have skills: Vertex AI, Gen AI, LLMs, RAGs

Accenture Strategy & Consulting's Global Network Data & AI practice helps our clients grow their business in entirely new ways. Analytics enables our clients to achieve high performance through insights from data - insights that inform better decisions and strengthen customer relationships. From strategy to execution, Accenture works with organizations to develop analytic capabilities - from accessing and reporting on data to predictive modelling - to outperform the competition. As part of our Data & AI practice, you will join a worldwide network of over 20,000 smart and driven colleagues experienced in leading AI/ML/statistical tools, methods, and applications. From data to analytics and insights to actions, our forward-thinking consultants provide analytically informed, issue-based insights at scale to help our clients improve outcomes and achieve high performance.

About the Comms & Media practice within Strategy & Consulting:
Comms & Media (C&M) is one of the industry practices within Accenture's S&C Global Network team, serving clients across the Communications and Media & Entertainment industries. Communications focuses primarily on industries related to telecommunications and information & communication technology (ICT); this team serves most of the world's leading wireline, wireless, cable, and satellite communications and service providers. Media & Entertainment focuses on industries like broadcast, entertainment, print, and publishing. Globally, the Accenture Comms & Media practice works to develop value growth strategies for its clients and infuses AI & GenAI to help deliver their top business imperatives, i.e., revenue growth and cost reduction. From multi-year Data & AI transformation projects to shorter, more agile engagements, we have a rapidly expanding portfolio of hyper-growth clients and an increasing footprint with next-gen solutions and industry practices.

Roles & Responsibilities:
A Telco-domain-experienced data science consultant is responsible for helping clients design and deliver AI solutions. He/she should be strong in the Telco domain and AI fundamentals, and should have good hands-on experience with the following:
Ability to work with large data sets and present conclusions to key stakeholders; data management using SQL.
Propose solutions to the client, based on gap analysis of existing Telco platforms, that can generate long-term and sustainable value for the client.
Gather business requirements from client stakeholders via interactions such as interviews and workshops.
Track down and read all previous information on the problem or issue in question; explore obvious and known avenues thoroughly; ask a series of probing questions to get to the root of a problem.
Understand the as-is process, identify issues with the processes that can be resolved through either Data & AI or process solutions, and design the to-be state at a detailed level.
Understand customer needs and identify/translate them into business requirements (business requirement definition), business process flows, and functional requirements, and be able to inform the best approach to the problem.
Adopt a clear and systematic approach to complex issues (i.e., A leads to B leads to C); analyze relationships between several parts of a problem or situation; anticipate obstacles and identify a critical path for a project.
Independently deliver products and services that empower clients to implement effective solutions; make specific changes and improvements to processes or your own work to achieve more.
Work with other team members and make deliberate efforts to keep others up to date.
Establish a consistent and collaborative presence with clients and act as the primary point of contact for assigned clients; escalate, track, and solve client issues.
Partner with clients to understand the end clients' business goals, marketing objectives, and competitive constraints.
Storytelling: crunch the data and numbers to craft a story to be presented to senior client stakeholders.
Should be able to travel or relocate to Japan as per business requirements.

Professional & Technical Skills:
Overall 5+ years of experience in Data Science, with at least 3 years in Telecom Analytics.
Master's (MBA/MSc/MTech) from a Tier 1/Tier 2 school and an Engineering degree from a Tier 1 school.
Demonstrated experience in solving real-world data problems through Data & AI.
Direct onsite experience (i.e., experience facing clients inside client offices in India or abroad) is mandatory; please note we are looking for client-facing roles.
Proficiency with data mining, mathematics, and statistical analysis.
Advanced pattern recognition and predictive modeling experience; knowledge of advanced analytical fields such as text mining, image recognition, video analytics, and IoT.
Execution-level understanding of econometric/statistical modeling packages.
Traditional techniques like linear/logistic regression, multivariate statistical analysis, time series techniques, and fixed/random effect modelling.
Machine learning techniques like Random Forest, gradient boosting, XGBoost, decision trees, and clustering.
Knowledge of deep learning modeling techniques like RNNs and CNNs.
Experience using digital and statistical modeling software (one or more): Python, R, PySpark, SQL, BigQuery, Vertex AI.
Proficiency in Excel, MS Word, PowerPoint, and corporate soft skills.
Knowledge of dashboard creation platforms: Excel, Tableau, Power BI, etc.
Excellent written and oral communication skills, with the ability to clearly communicate ideas and results to non-technical stakeholders.
Strong analytical and problem-solving skills and good communication skills.
Self-starter with the ability to work independently across multiple projects and set priorities; strong team player; proactive and solution-oriented, able to guide junior team members.
Execution knowledge of optimization techniques is good to have: exact optimization (linear and non-linear techniques) and evolutionary optimization (both population- and search-based algorithms).
Cloud platform certification and experience in computer vision are good to have.

About Our Company | Accenture
Qualification
Experience: Minimum 5+ years of experience is required
Educational Qualification: Master's (MBA/MSc/MTech) from a Tier 1/Tier 2 school and an Engineering degree from a Tier 1 school
Posted 1 week ago
13.0 - 18.0 years
13 - 17 Lacs
Bengaluru
Work from Office
Skill required: Tech for Operations - Artificial Intelligence (AI)
Designation: AI/ML Computational Science Manager
Qualifications: Any Graduation / Post Graduate Diploma in Management
Years of Experience: 13 to 18 years
Language Ability: English (Domestic) - Advanced

About Accenture: Combining unmatched experience and specialized skills across more than 40 industries, we offer Strategy and Consulting, Technology and Operations services, and Accenture Song, all powered by the world's largest network of Advanced Technology and Intelligent Operations centers. Our 699,000 people deliver on the promise of technology and human ingenuity every day, serving clients in more than 120 countries. Visit us at www.accenture.com

What would you do?
You will be part of the Technology for Operations team that acts as a trusted advisor and partner to Accenture Operations. The team provides innovative and secure technologies to help clients build an intelligent operating model, driving exceptional results, and works closely with the sales, offering, and delivery teams to identify and build innovative solutions. Major sub-deals include AHO (Application Hosting Operations), ISMT (Infrastructure Management), and Intelligent Automation. The role requires an understanding of the foundational principles of Artificial Intelligence, including concepts, techniques, and tools, in order to use AI effectively.

What are we looking for?
Python (programming language) and Python software development
PySpark
Microsoft SQL Server and Microsoft SQL Server Integration Services (SSIS)
Ability to work well in a team
Written and verbal communication
Numerical ability
Results orientation

Roles and Responsibilities:
In this role you are required to identify and assess complex problems for your area of responsibility. You will create solutions in situations in which analysis requires an in-depth evaluation of variable factors. This requires adherence to the strategic direction set by senior management when establishing near-term goals. Interaction is with senior management at a client and/or within Accenture, involving matters that may require acceptance of an alternate approach. Some latitude in decision-making is involved; you will act independently to determine methods and procedures on new assignments. Decisions made at this level have a major day-to-day impact on the area of responsibility. The person manages large to medium-sized teams and/or work efforts (if in an individual contributor role) at a client or within Accenture. Please note that this role may require you to work in rotational shifts.

Qualification: Any Graduation, Post Graduate Diploma in Management
Posted 1 week ago
16.0 - 25.0 years
14 - 19 Lacs
Bengaluru
Work from Office
Skill required: Tech for Operations - Product Development Management
Designation: AI/ML Computational Science Sr Manager
Qualifications: Any Graduation
Years of Experience: 16 to 25 years

About Accenture: Combining unmatched experience and specialized skills across more than 40 industries, we offer Strategy and Consulting, Technology and Operations services, and Accenture Song, all powered by the world's largest network of Advanced Technology and Intelligent Operations centers. Our 699,000 people deliver on the promise of technology and human ingenuity every day, serving clients in more than 120 countries. Visit us at www.accenture.com

What would you do?
You will be part of the Technology for Operations team that acts as a trusted advisor and partner to Accenture Operations. The team provides innovative and secure technologies to help clients build an intelligent operating model, driving exceptional results, and works closely with the sales, offering, and delivery teams to identify and build innovative solutions. Major sub-deals include AHO (Application Hosting Operations), ISMT (Infrastructure Management), and Intelligent Automation. You will manage the end-to-end product development process from conception to design and production start-up, including product structure design, the engineering requirement process, multi-functional resource collaboration, and engineering and supply chain integration.

What are we looking for?
Experience in product management, applying product management principles.
Experience across multiple domains in launching/acquiring new products and offerings.
Solid experience working with client/customer management teams to achieve product objectives.
Experience envisioning, assessing, contracting, and onboarding off-the-shelf products to accelerate the goal of establishing a foothold.
Work with other Product Managers and functional product owners to remove overlap and duplication of functionality and features across the organization.
Decide on prioritized features as per market, user, customer, and business requirements.
Work closely with the product functional owner to cull out the requirements for the functionality that is prioritized.

Roles and Responsibilities:
In this role you are required to identify and assess complex problems for your area(s) of responsibility. You should create solutions in situations in which analysis requires in-depth knowledge of organizational objectives. The role requires involvement in setting strategic direction to establish near-term goals for your area(s) of responsibility. Interaction is with senior management levels at a client and/or within Accenture, involving negotiating or influencing on significant matters. You should have latitude in decision-making and in determining objectives and approaches to critical assignments. These decisions have a lasting impact on the area of responsibility, with the potential to impact areas outside it. The individual manages large teams and/or work efforts (if in an individual contributor role) at a client or within Accenture. Please note that this role may require you to work in rotational shifts.

Qualification: Any Graduation
Posted 1 week ago
15.0 - 20.0 years
15 - 19 Lacs
Gurugram
Work from Office
Project Role: Technology Architect
Project Role Description: Design and deliver technology architecture for a platform, product, or engagement. Define solutions to meet performance, capability, and scalability needs.
Must have skills: SAP Plant Maintenance (PM)
Good to have skills: NA
Minimum 7.5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As a Technology Architect, you will design and deliver technology architecture for a platform, product, or engagement. Your typical day will involve collaborating with various teams to define solutions that meet performance, capability, and scalability needs. You will engage in discussions to ensure that the architecture aligns with business objectives and technical requirements, while also addressing any challenges that arise during the development process. Your role will require you to stay updated with the latest technology trends and best practices to ensure that the solutions you propose are innovative and effective.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Evaluate and recommend new technologies that can improve system performance.

Professional & Technical Skills:
- Must To Have Skills: Proficiency in SAP Plant Maintenance (PM).
- Strong understanding of technology architecture principles.
- Experience with system integration and data flow management.
- Ability to analyze and optimize system performance.
- Familiarity with cloud technologies and their application in architecture design.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in SAP Plant Maintenance (PM).
- This position is based at our Gurugram office.
- A 15 years full time education is required.

Qualification: 15 years full time education
Posted 1 week ago
5.0 - 8.0 years
8 - 12 Lacs
Bengaluru
Work from Office
Job Title: Security Delivery Senior Analyst
Management Level: 10 - Senior Analyst
Location: Bengaluru
Must have skills: Node.js, PostgreSQL, AWS, Azure DevOps, Agile, CI/CD, strong communication, estimation (for level 8/9)
Good to have skills: Application Security, AWS Fargate, Google BigQuery

Job Summary:
The ISD backend developer will be responsible for writing code for upcoming changes and operational tasks. The application is built on an AWS cloud-native architecture and is written in AngularJS and Node.js, both in TypeScript. The developer must be skilled in Node.js, PostgreSQL, and AWS, and be familiar with agile concepts and automated CI/CD, including unit testing.

Roles & Responsibilities:
The backend developer will be responsible for supporting a custom-built dashboard in AngularJS and Node.js. Level 9/8 developers must prioritize work, estimate work, and assist other developers. The developer will also take part in future project planning and estimation.

Professional & Technical Skills:
Technical experience: The backend developer must be skilled in Node.js and PostgreSQL and have working knowledge of TypeScript. The developer must also be experienced in Continuous Integration/Continuous Deployment (CI/CD) to automate builds and deployments.
Professional experience: The backend developer must be self-motivated with excellent communication skills. The developer should be able to work with the lead(s) to solve complex development challenges, perform peer/quality reviews, and maintain the team's code repository and deployment activities.

Additional Information: About Our Company | Accenture
Qualification
Experience: Minimum 4+ years of experience is required
Educational Qualification: Any Degree
Posted 1 week ago
6.0 - 11.0 years
15 - 25 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
Warm welcome from SP Staffing Services! Reaching out to you regarding a permanent opportunity.
Job Description:
Experience: 6-12 yrs
Location: Hyderabad/Bangalore/Pune/Gurgaon
Skill: GCP Data Engineer
Interested candidates can share their resume with sangeetha.spstaffing@gmail.com along with the below details inline:
Full Name as per PAN:
Mobile No:
Alt No / WhatsApp No:
Total Exp:
Relevant Exp in GCP:
Relevant Exp in BigQuery:
Relevant Exp in Big Data:
Current CTC:
Expected CTC:
Notice Period (Official):
Notice Period (Negotiable)/Reason:
Date of Birth:
PAN Number:
Reason for Job Change:
Offer in Pipeline (Current Status):
Availability for virtual interview on weekdays between 10 AM - 4 PM (please mention time):
Current Residential Location:
Preferred Job Location:
Is your educational percentage in 10th std, 12th std, and UG above 50% in each?
Do you have any gaps in your education or career? If so, please mention the duration in months/years:
Posted 1 week ago
4.0 - 8.0 years
16 - 25 Lacs
Gurugram
Hybrid
Bachelor's/Master's degree in Computer Science, Management of Information Systems, or equivalent.
2+ years of experience in GCP - BigQuery, Dataproc, Dataflow.
4 or more years of relevant software engineering experience (Big Data: Python, SQL, Hadoop, Hive, Spark) in a data-focused role.
Strong experience in Big Data, Python, SQL, and Spark, plus cloud experience (GCP/AWS/Azure).
Experience designing and building highly scalable and reliable data pipelines using Big Data tools (Airflow, Python, Redshift/Snowflake).
Software development experience with proficiency in Python, Java, Scala, or another language.
Good knowledge of Big Data querying tools such as Hive; experience with Spark/PySpark.
Ability to analyse and obtain insights from complex/large data sets.
Design and develop highly performing SQL Server database objects.
Posted 1 week ago
7.0 - 10.0 years
35 - 40 Lacs
Chennai
Hybrid
Program Manager, Data - Chennai
Forbes Advisor is a high-growth digital media and technology company that empowers consumers to make confident decisions about money, health, careers, and everyday life. Our global data organization builds modern, AI-augmented pipelines that turn information into revenue-driving insight.

Job Description:
We're hiring a Program Manager to orchestrate complex, cross-functional data initiatives, from revenue-pipeline automation to analytics product launches. You'll be the connective tissue between Data Engineering, Analytics, RevOps, Product, and external partners, ensuring programs land on time, on scope, and with measurable impact. If you excel at turning vision into executable roadmaps, mitigating risk before it bites, and communicating clearly across technical and business audiences, we'd love to meet you.

Key Responsibilities:
Own program delivery for multi-team data products (e.g., revenue-data pipelines, attribution models, partner-facing reporting APIs).
Build and maintain integrated roadmaps, aligning sprint plans, funding, and resource commitments.
Drive agile ceremonies (backlog grooming, sprint planning, retrospectives) and track velocity, burn-down, and cycle-time metrics.
Create transparent status reporting (risks, dependencies, OKRs) tailored for audiences from engineers to the C-suite.
Proactively remove blockers by coordinating with Platform, IT, Legal/Compliance, and external vendors.
Champion process optimization: intake, prioritization, change management, and post-mortems.
Partner with RevOps and Media teams to ensure program outputs translate into revenue growth and faster decision making.
Facilitate launch readiness (QA checklists, enablement materials, go-live runbooks) so new data products land smoothly.
Foster a culture of documentation, psychological safety, and continuous improvement within the data organization.

Experience required:
7+ years of program or project management experience in data, analytics, SaaS, or high-growth tech.
Proven success delivering complex, multi-stakeholder initiatives on aggressive timelines.
Expertise with agile frameworks (Scrum/Kanban) and modern collaboration tools (Jira, Asana, Notion/Confluence, Slack).
Strong understanding of data and cloud concepts (pipelines, ETL/ELT, BigQuery, dbt, Airflow/Composer).
Excellent written and verbal communication; able to translate between technical teams and business leaders.
Risk-management mindset: identify, quantify, and drive mitigation before issues escalate.
Experience coordinating across time zones and cultures in a remote-first environment.

Nice to Have:
Formal certification (PMP, PMI-ACP, CSM, SAFe, or equivalent).
Familiarity with GCP services, Looker/Tableau, or marketing-data stacks (Google Ads, Meta, GA4).
Exposure to revenue operations, performance marketing, or subscription/affiliate business models.
Background in change-management or process-improvement methodologies (Lean, Six Sigma).

Perks:
Monthly long weekends: every third Friday off.
Fitness and commute reimbursement.
Remote-first culture with flexible hours and a high-trust environment.
Opportunity to shape a world-class data platform inside a trusted global brand.
Collaborate with talented engineers, analysts, and product leaders who value innovation and impact.
Posted 1 week ago
5.0 - 10.0 years
0 - 0 Lacs
Hyderabad, Chennai
Hybrid
Job Description:
Design and develop robust ETL pipelines using Python, PySpark, and GCP services.
Build and optimize data models and queries in BigQuery for analytics and reporting.
Ingest, transform, and load structured and semi-structured data from various sources.
Collaborate with data analysts, scientists, and business teams to understand data requirements.
Ensure data quality, integrity, and security across cloud-based data platforms.
Monitor and troubleshoot data workflows and performance issues.
Automate data validation and transformation processes using scripting and orchestration tools.

Required Skills & Qualifications:
Hands-on experience with Google Cloud Platform (GCP), especially BigQuery.
Strong programming skills in Python and/or PySpark.
Experience in designing and implementing ETL workflows and data pipelines.
Proficiency in SQL and data modeling for analytics.
Familiarity with GCP services such as Cloud Storage, Dataflow, Pub/Sub, and Composer.
Understanding of data governance, security, and compliance in cloud environments.
Experience with version control (Git) and agile development practices.
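As a minimal sketch of the PySpark-to-BigQuery ETL flow outlined above, assuming a Dataproc-style environment with the spark-bigquery connector available; bucket, table, and column names are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Ingest semi-structured source data from Cloud Storage (hypothetical bucket)
raw = spark.read.json("gs://my-bucket/raw/orders/*.json")

# Transform: deduplicate, filter bad records, derive a date column
orders = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("amount") > 0)
       .withColumn("order_date", F.to_date("created_at"))
)

# Load into BigQuery for analytics; requires the spark-bigquery connector
(
    orders.write.format("bigquery")
          .option("table", "my-proj.analytics.orders")    # hypothetical table
          .option("temporaryGcsBucket", "my-bucket-tmp")  # staging bucket
          .mode("append")
          .save()
)
```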
Posted 1 week ago
5.0 - 10.0 years
12 - 20 Lacs
Bengaluru
Work from Office
Senior Data Scientist
Req number: R5797
Employment type: Full time
Worksite flexibility: Hybrid

Who we are: CAI is a global technology services firm with over 8,500 associates worldwide and a yearly revenue of $1 billion+. We have over 40 years of excellence in uniting talent and technology to power the possible for our clients, colleagues, and communities. As a privately held company, we have the freedom and focus to do what is right, whatever it takes. Our tailor-made solutions create lasting results across the public and commercial sectors, and we are trailblazers in bringing neurodiversity to the enterprise.

Job Summary: We're searching for an experienced Senior Data Scientist who excels at statistical analysis, feature engineering, and end-to-end machine learning operations. Your primary mission will be to build and productionize demand-forecasting models across thousands of SKUs, while owning the full model lifecycle, from data discovery through automated retraining and performance monitoring. This is a full-time, hybrid position.

What You'll Do
Advanced ML Algorithms: Design, train, and evaluate supervised and unsupervised models (regression, classification, clustering, uplift). Apply automated hyperparameter optimization (Optuna, HyperOpt) and interpretability techniques (SHAP, LIME).
Data Analysis & Feature Engineering: Perform deep exploratory data analysis (EDA) to uncover patterns and anomalies. Engineer predictive features from structured, semi-structured, and unstructured data; manage feature stores (Feast). Ensure data quality through rigorous validation and automated checks.
Time-Series Forecasting (Demand): Build hierarchical, intermittent, and multi-seasonal forecasts for thousands of SKUs. Implement traditional (ARIMA, ETS, Prophet) and deep-learning (RNN/LSTM, Temporal Fusion Transformer) approaches. Reconcile forecasts across product/category hierarchies; quantify accuracy (MAPE, WAPE) and bias.
MLOps & Model Lifecycle: Establish model tracking and registries (MLflow, SageMaker Model Registry). Develop CI/CD pipelines for automated retraining, validation, and deployment (Airflow, Kubeflow, GitHub Actions). Monitor data and concept drift; trigger retuning or rollback as needed.
Statistical Analysis & Experimentation: Design and analyze A/B tests, causal inference studies, and Bayesian experiments. Provide statistically grounded insights and recommendations to stakeholders.
Collaboration & Leadership: Translate business objectives into data-driven solutions; present findings to executive and non-technical audiences. Mentor junior data scientists, review code/notebooks, and champion best practices.

What You'll Need
M.S. in Statistics (preferred) or a related field such as Applied Mathematics, Computer Science, or Data Science.
5+ years building and deploying ML models in production.
Expert-level proficiency in Python (Pandas, NumPy, SciPy, scikit-learn), SQL, and Git.
Demonstrated success delivering large-scale demand-forecasting or time-series solutions.
Hands-on experience with MLOps tools (MLflow, Kubeflow, SageMaker, Airflow) for model tracking and automated retraining.
Solid grounding in statistical inference, hypothesis testing, and experimental design.
Experience in supply-chain, retail, or manufacturing domains with high-granularity SKU data.
Familiarity with distributed data frameworks (Spark, Dask) and cloud data warehouses (BigQuery, Snowflake).
Knowledge of deep-learning libraries (PyTorch, TensorFlow) and probabilistic programming (PyMC, Stan).
Strong data-visualization skills (Plotly, Dash, Tableau) for storytelling and insight communication.

Physical Demands: This role involves mostly sedentary work, with occasional movement around the office to attend meetings, etc. Ability to perform repetitive tasks on a computer, using a mouse, keyboard, and monitor.

Reasonable accommodation statement: If you require a reasonable accommodation in completing this application, interviewing, completing any pre-employment testing, or otherwise participating in the employment selection process, please direct your inquiries to application.accommodations@cai.io or (888) 824-8111.
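A hedged, minimal sketch of the SKU demand-forecasting workflow with experiment tracking that the posting above describes, assuming Prophet and MLflow; the data file, column names, and horizon are illustrative.

```python
import pandas as pd
import mlflow
from prophet import Prophet

# Daily demand history for one SKU; Prophet expects columns ds (date) and y
history = pd.read_csv("sku_123_demand.csv").rename(  # hypothetical file
    columns={"date": "ds", "units_sold": "y"}
)

with mlflow.start_run(run_name="sku_123_prophet"):
    model = Prophet(weekly_seasonality=True, yearly_seasonality=True)
    model.fit(history)

    future = model.make_future_dataframe(periods=28)  # 4-week horizon
    forecast = model.predict(future)

    # Simple in-sample WAPE, one of the accuracy metrics the posting names
    joined = forecast.merge(history, on="ds")
    wape = (joined["y"] - joined["yhat"]).abs().sum() / joined["y"].abs().sum()
    mlflow.log_param("horizon_days", 28)
    mlflow.log_metric("wape", float(wape))
```

In a production setting this per-SKU loop would typically run inside an orchestrated pipeline so retraining and drift checks happen automatically, as the posting's MLOps duties suggest.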
Posted 1 week ago
6.0 - 9.0 years
18 - 25 Lacs
Bangalore Rural, Bengaluru
Work from Office
ETL Tester with ETL/data migration testing experience: AWS-to-GCP data migration (PostgreSQL to AlloyDB, Presto to BigQuery, S3 to Google Cloud Storage); PostgreSQL, AlloyDB, Presto, BigQuery, S3, and GCS; Python for test automation; data warehousing and cloud-native tools.
Posted 1 week ago
4.0 - 9.0 years
20 - 35 Lacs
Hyderabad
Work from Office
Data Tester
Job Description: We are looking for a skilled Data Tester to validate data accuracy and integrity across data warehouses, data integrations, and the curated business-ready layer.

Key Responsibilities:
• Test the data warehouse and ensure accurate data integration across various data sources.
• Validate the Business-Ready Dataset (BRD), ensuring data is curated correctly for business use.
• Develop and execute test cases to ensure data consistency and integrity in ETL/ELT processes.
• Test data models and verify that data flows seamlessly through the data warehouse to the analytical layer.
• Perform end-to-end testing for data integrations, ensuring compliance with business rules and transformation logic.
• Collaborate with data engineers and business analysts to define testing strategies and resolve defects.
• Conduct performance testing to ensure the scalability and efficiency of data pipelines.
• Automate data validation and regression tests using relevant tools and frameworks.
• Document and report test results, ensuring transparency and accountability.

Required Skills:
• Strong experience in data warehouse testing, validating ETL/ELT pipelines and data models.
• Hands-on experience with testing tools and frameworks for data quality and integration.
• Knowledge of SQL for querying and validating data in Google BigQuery (GBQ) or other cloud-based platforms.
• Strong attention to detail and analytical skills for identifying and resolving data issues.
• Excellent collaboration skills to work with data engineers, analysts, and business teams.

Preferred Skills:
• Familiarity with Airflow for pipeline orchestration and Python for automating test cases.
• Experience with metadata management and data governance processes.
• Other automation tools.
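A minimal sketch of the kind of automated reconciliation check this posting asks for, assuming the google-cloud-bigquery client and a pytest-style test; table names are made up for illustration.

```python
from google.cloud import bigquery

client = bigquery.Client()

def row_count(table: str) -> int:
    """Count rows in a fully qualified BigQuery table."""
    sql = f"SELECT COUNT(*) AS n FROM `{table}`"
    return next(iter(client.query(sql).result())).n

def test_orders_reconciliation():
    source = row_count("my-proj.staging.orders")       # hypothetical source
    target = row_count("my-proj.curated.orders_brd")   # hypothetical BRD table
    # The curated business-ready layer should not silently drop rows
    assert source == target, f"row count mismatch: {source} vs {target}"
```

Run under pytest, a check like this can be scheduled from Airflow so every pipeline run is validated automatically, in line with the posting's automation focus.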
Posted 1 week ago
8.0 - 13.0 years
0 - 1 Lacs
Bengaluru
Remote
Role & Responsibilities
Basic qualifications:
8+ years of experience with Java, including building complex, scalable applications.
6+ years of experience with Spring Boot, including designing and implementing advanced microservices architectures.
4+ years of GCP and BigQuery experience.
Contribute to IS projects; conduct systems and requirements analyses to identify project action items.
Perform analysis and design; participate in defining and developing technical specifications to meet systems requirements.
Design and develop moderate to highly complex applications; analyze, design, code, test, correct, and document moderate to highly complex programs to ensure optimal performance and compliance.
Develop application documentation; develop and maintain system documentation to ensure accuracy and consistency.
Produce integration builds; define and produce integration builds to create applications.
Perform maintenance and support; define and administer procedures to monitor systems performance and integrity.
Support emerging technologies and products; monitor the industry to gain knowledge and understanding of emerging technologies.
Must have GCP and BigQuery experience.
Should have experience with Power BI, microservice architecture, SQL Server, DB2, Spring Boot, JSON, Java, C#, AMQP, Azure AD, HTTP, and README documentation.
Should be proficient in Git, Scrum, and Azure DevOps.
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
pune, maharashtra
On-site
You are a skilled GCP Data Engineer with over 6 years of experience and expertise in SQL and Python coding. Your primary responsibility will be to design, build, and optimize data pipelines and ETL workflows on Google Cloud Platform (GCP), with a focus on BigQuery. You must have a solid understanding of data engineering best practices and work collaboratively with data scientists, analysts, and other stakeholders to deliver high-quality data solutions.

Your key responsibilities will include working extensively with BigQuery for data modeling, transformation, and analysis, as well as writing efficient SQL queries and Python scripts for data processing. You will be expected to ensure data quality, performance, and security across pipelines.

To excel in this role, you must possess strong SQL and Python coding skills, with at least 3 years of experience working with GCP services, particularly BigQuery. A good understanding of data warehouse concepts and cloud-native data processing is essential. Experience with Airflow or other workflow orchestration tools will be considered a plus. Additionally, you should have excellent problem-solving and communication skills to collaborate effectively with cross-functional teams.
Posted 1 week ago