
3301 Big Data Jobs - Page 14

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 - 7.0 years

0 Lacs

Jaipur, Rajasthan

On-site

Job Description: As an advertising-technology-focused software engineer at Octillion Media, you will be responsible for designing, implementing, and managing end-to-end data pipelines to ensure data is easily accessible for analysis. Your role will involve integrating with third-party APIs to access external data, creating and maintaining data warehouses for reporting and analysis, and collaborating with engineering and product teams to execute data-related product initiatives. You will also evaluate existing tools/solutions for new use cases and build new ones where necessary. A willingness to take end-to-end ownership and be accountable for the product's success will be crucial in this role.

You should have a minimum of 3 years of experience in a data engineering role and the ability to write clean, structured code in SQL, bash scripts, and Python (or similar languages). A solid understanding of database technologies, experience building automated, scalable, and robust data processing systems, and familiarity with ETL and data warehouse systems such as Athena/BigQuery are essential qualifications for this position. Experience working with large-scale quantitative data using technologies like Spark, as well as the ability to quickly resolve performance and system incidents, will be advantageous. Experience with Big Data/ML and familiarity with RTB, Google IMA SDK, VAST, VPAID, and Header Bidding will be considered a plus, as will previous experience at product companies.

If you are looking to join a dynamic team at Octillion Media and contribute to cutting-edge advertising technology solutions, we encourage you to apply. Your information will be handled confidentially in accordance with EEO guidelines.
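The pipeline work this listing describes typically pairs SQL with Python on Spark. As a minimal illustration (not Octillion's actual stack), the sketch below shows a PySpark batch job that ingests raw ad events, cleans them, and writes a partitioned warehouse table; the paths and column names are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("ad_events_etl").getOrCreate()

# Ingest raw ad-serving events (hypothetical path and schema)
raw = spark.read.json("s3://example-bucket/raw/ad_events/2025-07-14/")

# Clean and transform: drop malformed rows, normalize types
events = (
    raw.filter(F.col("event_id").isNotNull())
       .withColumn("event_date", F.to_date("event_ts"))
       .withColumn("bid_price_usd", F.col("bid_price").cast("double"))
)

# Write a date-partitioned table for downstream reporting
(events.write
       .mode("overwrite")
       .partitionBy("event_date")
       .parquet("s3://example-bucket/warehouse/ad_events/"))
```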

Posted 1 week ago

Apply

5.0 - 10.0 years

20 - 35 Lacs

Pune

Hybrid

Our client is a global IT service and consulting organization.

Role: Data Software Engineer
Location: Pune
Notice period: Immediate to 60 days
Interview: Face-to-face interview on Sunday, 27th July, in Pune
Experience: 5-12 years
Skills: Python, Spark, Azure Databricks/GCP/AWS

Data Software Engineer - Spark, Python (AWS, Kafka or Azure Databricks or GCP)

Job Description:
- 5-12 years of experience in Big Data and related technologies
- Expert-level understanding of distributed computing principles
- Expert-level knowledge of and experience with Apache Spark
- Hands-on programming with Python
- Proficiency with Hadoop v2, MapReduce, HDFS, and Sqoop
- Experience building stream-processing systems using technologies such as Apache Storm or Spark Streaming (see the sketch after this listing)
- Experience with messaging systems such as Kafka or RabbitMQ
- Good understanding of Big Data querying tools such as Hive and Impala
- Experience integrating data from multiple sources such as RDBMS (SQL Server, Oracle), ERP, and files
- Good understanding of SQL queries, joins, stored procedures, and relational schemas
- Experience with NoSQL databases such as HBase, Cassandra, and MongoDB
- Knowledge of ETL techniques and frameworks
- Performance tuning of Spark jobs
- Experience with native cloud data services on AWS or Azure Databricks
- Ability to lead a team efficiently
- Experience designing and implementing Big Data solutions
- Practitioner of Agile methodology
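Since the role combines Spark, Python, and Kafka, here is a minimal sketch of the kind of stream-processing job it involves: Spark Structured Streaming consuming a Kafka topic and maintaining windowed counts. The broker address and topic are placeholders, and it assumes the spark-sql-kafka connector is on the classpath.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka_stream_demo").getOrCreate()

# Subscribe to a Kafka topic (placeholder broker/topic)
stream = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "orders")
          .load())

# Kafka delivers bytes; cast the payload and aggregate per minute
orders = stream.selectExpr("CAST(value AS STRING) AS payload", "timestamp")
counts = (orders
          .withWatermark("timestamp", "5 minutes")
          .groupBy(F.window("timestamp", "1 minute"))
          .count())

# Emit the running counts to the console for demonstration
query = (counts.writeStream
         .outputMode("update")
         .format("console")
         .start())
query.awaitTermination()
```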

Posted 1 week ago

Apply

4.0 - 6.0 years

10 - 14 Lacs

Hyderabad

Work from Office

In this role, you will design, build, and maintain data lake solutions for scientific data that drive business decisions for Research. You will build scalable, high-performance data engineering solutions for large scientific datasets and collaborate with Research stakeholders. The ideal candidate has experience in the pharmaceutical or biotech industry, strong technical skills, proficiency with big data technologies, and a deep understanding of data architecture and ETL processes.

Roles & Responsibilities:
- Design, develop, and implement data pipelines, ETL/ELT processes, and data integration solutions
- Take ownership of data pipeline projects from inception to deployment; manage scope, timelines, and risks
- Develop and maintain data models for biopharma scientific data, data dictionaries, and other documentation to ensure data accuracy and consistency
- Optimize large datasets for query performance
- Collaborate with global cross-functional teams, including research scientists, to understand data requirements and design solutions that meet business needs
- Implement data security and privacy measures to protect sensitive data
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions
- Collaborate with Data Architects, Business SMEs, Software Engineers, and Data Scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions
- Identify and resolve complex data-related challenges
- Adhere to best practices for coding, testing, and designing reusable code/components
- Explore new tools and technologies to improve ETL platform performance
- Participate in sprint planning meetings and provide estimates on technical implementation
- Maintain comprehensive documentation of processes, systems, and solutions

Basic Qualifications and Experience:
- Doctorate degree, OR
- Master's degree with 4-6 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics, or a related field, OR
- Bachelor's degree with 6-8 years of experience in the same fields, OR
- Diploma with 10-12 years of experience in the same fields

Preferred Qualifications and Experience:
- 3+ years of experience implementing and supporting biopharma scientific research data analytics (software platforms)

Functional Skills - Must-Have:
- Proficiency in SQL and Python for data engineering, test automation frameworks (pytest), and scripting tasks (see the test sketch after this listing)
- Hands-on experience with big data technologies and platforms such as Databricks and Apache Spark (PySpark, SparkSQL), workflow orchestration, and performance tuning of big data processing
- Excellent problem-solving skills and the ability to work with large, complex datasets

Functional Skills - Good-to-Have:
- A passion for tackling complex challenges in drug discovery with technology and data
- Strong understanding of data modeling, data warehousing, and data integration concepts
- Strong experience using RDBMS (e.g., Oracle, MySQL, SQL Server, PostgreSQL)
- Knowledge of cloud data platforms (AWS preferred)
- Experience with data visualization tools (e.g., Dash, Plotly, Spotfire)
- Experience with diagramming and collaboration tools such as Miro or Lucidchart for process mapping and brainstorming
- Experience writing and maintaining technical documentation in Confluence
- Understanding of data governance frameworks, tools, and best practices

Professional Certifications: Databricks Certified Data Engineer Professional preferred

Soft Skills:
- Excellent critical-thinking and problem-solving skills
- Strong communication and collaboration skills
- Demonstrated awareness of how to function in a team setting
- Demonstrated presentation skills

EQUAL OPPORTUNITY STATEMENT: We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.

Apply now for a career that defies imagination. Objects in your future are closer than they appear. Join us. careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed, and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law.
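The must-have skills pair Python, pytest, and PySpark. Below is a minimal, hypothetical sketch of the kind of automated check such a role produces: a pytest test verifying that an invented deduplication transform keeps only the latest record per key, run against a local Spark session.

```python
import pytest
from pyspark.sql import SparkSession, Window, functions as F


def dedupe_latest(df, key_col, ts_col):
    """Keep only the most recent row per key (illustrative transform)."""
    w = Window.partitionBy(key_col).orderBy(F.col(ts_col).desc())
    return (df.withColumn("_rn", F.row_number().over(w))
              .filter("_rn = 1")
              .drop("_rn"))


@pytest.fixture(scope="session")
def spark():
    # Local single-threaded session is enough for unit tests
    return (SparkSession.builder
            .master("local[1]")
            .appName("pipeline-tests")
            .getOrCreate())


def test_dedupe_keeps_latest_record(spark):
    df = spark.createDataFrame(
        [("s1", "2025-01-01", 1.0), ("s1", "2025-01-02", 2.0)],
        ["sample_id", "measured_on", "value"],
    )
    out = dedupe_latest(df, "sample_id", "measured_on").collect()
    assert len(out) == 1
    assert out[0]["value"] == 2.0
```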

Posted 1 week ago

Apply

1.0 - 3.0 years

14 - 16 Lacs

Hyderabad

Work from Office

Let's do this. Let's change the world. In this vital role you will be responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and executing data initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.

Responsibilities:
- Design, develop, and maintain data solutions for data generation, collection, and processing
- Be a key team member assisting in the design and development of the data pipeline
- Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems
- Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions
- Take ownership of data pipeline projects from inception to deployment; manage scope, timelines, and risks
- Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency
- Implement data security and privacy measures to protect sensitive data
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions
- Collaborate and communicate effectively with product teams
- Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions
- Identify and resolve complex data-related challenges
- Adhere to best practices for coding, testing, and designing reusable code/components
- Explore new tools and technologies to improve ETL platform performance
- Participate in sprint planning meetings and provide estimates on technical implementation

Basic Qualifications:
- Master's degree and 1 to 3 years of Computer Science, IT, or related field experience, OR
- Bachelor's degree and 3 to 5 years of Computer Science, IT, or related field experience, OR
- Diploma and 7 to 9 years of Computer Science, IT, or related field experience

Must-Have Skills:
- Hands-on experience with big data technologies and platforms such as Databricks and Apache Spark (PySpark, SparkSQL), workflow orchestration, and performance tuning of big data processing
- Proficiency in data analysis tools (e.g., SQL)
- Proficiency in SQL for extracting, transforming, and analyzing complex datasets from relational data stores (see the example after this listing)
- Experience with ETL tools such as Apache Spark and various Python packages for data processing and machine learning model development
- Strong understanding of data modeling, data warehousing, and data integration concepts
- Proven ability to optimize query performance on big data platforms

Preferred Qualifications:
- Experience with software engineering best practices, including but not limited to version control, infrastructure-as-code, CI/CD, and automated testing
- Knowledge of Python/R, Databricks, and cloud data platforms
- Strong understanding of data governance frameworks, tools, and best practices
- Knowledge of data protection regulations and compliance requirements (e.g., GDPR, CCPA)

Professional Certifications:
- AWS Certified Data Engineer preferred
- Databricks certification preferred

Soft Skills:
- Excellent critical-thinking and problem-solving skills
- Strong communication and collaboration skills
- Demonstrated awareness of how to function in a team setting
- Demonstrated presentation skills

What you can expect of us: As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.
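As a minimal illustration of the SQL-on-Spark analysis the must-have skills describe, the sketch below registers a DataFrame as a temporary view and runs an aggregate query with SparkSQL; the table and column names are invented for the example.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sparksql_demo").getOrCreate()

# Hypothetical fact table of daily record loads per site
events = spark.createDataFrame(
    [("site_a", "2025-07-01", 120),
     ("site_a", "2025-07-02", 95),
     ("site_b", "2025-07-01", 240)],
    ["site", "event_date", "record_count"],
)
events.createOrReplaceTempView("site_events")

# The extract-and-analyze step expressed in SQL
summary = spark.sql("""
    SELECT site,
           SUM(record_count) AS total_records,
           COUNT(DISTINCT event_date) AS active_days
    FROM site_events
    GROUP BY site
    ORDER BY total_records DESC
""")
summary.show()
```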

Posted 1 week ago

Apply

4.0 - 8.0 years

10 - 14 Lacs

Gurugram

Work from Office

Overview: We are seeking a self-driven Senior Tableau Engineer with deep expertise in data modeling, visualization design, and BI-tool migrations. You'll own end-to-end dashboard development, translate complex healthcare and enterprise data into actionable insights, and lead migrations from legacy BI platforms (e.g., MicroStrategy, BusinessObjects) to Tableau.

Job Location: Delhi NCR / Bangalore / Pune

Key Responsibilities:

Data Modeling & Architecture
- Design and maintain logical and physical data models optimized for Tableau performance
- Collaborate with data engineers to define star/snowflake schemas, data marts, and semantic layers
- Ensure data integrity, governance, and lineage across multiple source systems

Visualization Development
- Develop high-impact, interactive Tableau dashboards and visualizations for executive-level stakeholders
- Apply design best practices: color theory, UX principles, and accessibility standards
- Optimize workbooks for performance (efficient calculations, extracts, and queries)

BI Migration & Modernization
- Lead migration projects from MicroStrategy, BusinessObjects, or other BI tools to Tableau
- Reproduce and enhance legacy reports in Tableau, ensuring feature parity and improved UX
- Validate data accuracy post-migration through sampling, reconciliation, and automated testing

Automation & Deployment
- Automate data extract refreshes, alerting, and workbook publishing via Tableau Server/Online
- Implement CI/CD processes for Tableau content using Git, Tableau APIs, and automated testing frameworks (see the sketch after this listing)
- Establish standardized naming conventions, folder structures, and content lifecycle policies

Collaboration & Mentorship
- Partner with analytics translators, data engineers, and business owners to gather requirements and iterate on solutions
- Mentor junior BI developers on Tableau best practices, performance tuning, and dashboard design
- Evangelize self-service BI adoption: train users, develop documentation, and host office hours

Governance & Quality
- Define and enforce Tableau governance: security, permissions, version control, and change management
- Implement data quality checks and monitoring for dashboards (row counts, anomalies, thresholds)
- Track and report key metrics on dashboard usage, performance, and user satisfaction
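A deployment step like the one described under Automation & Deployment might look like the following sketch, which uses the open-source tableauserverclient library to publish a workbook from a CI job. The server URL, token, project name, and file path are placeholders, not this employer's actual setup.

```python
import tableauserverclient as TSC

# Placeholder credentials, typically injected from CI secrets
auth = TSC.PersonalAccessTokenAuth("ci-token-name", "ci-token-value",
                                   site_id="analytics")
server = TSC.Server("https://tableau.example.com", use_server_version=True)

with server.auth.sign_in(auth):
    # Find the target project by name (placeholder project)
    projects, _ = server.projects.get()
    target = next(p for p in projects if p.name == "Healthcare Dashboards")

    # Publish (overwrite) the workbook built earlier in the pipeline
    workbook = TSC.WorkbookItem(project_id=target.id)
    workbook = server.workbooks.publish(
        workbook, "dist/exec_dashboard.twbx",
        mode=TSC.Server.PublishMode.Overwrite)
    print(f"Published workbook {workbook.name} ({workbook.id})")
```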

Posted 1 week ago

Apply

14.0 - 18.0 years

20 - 35 Lacs

Hyderabad

Work from Office

JOB DESCRIPTION:
Job Title: BI Lead (Sr. ERP Consultant)
Job Location: Hyderabad
Experience: 13+ years of relevant experience
Job Type: Full Time
Notice Period: Immediate to 30 days

Objectives and Responsibilities of the BI Lead:

(A) As Business Intelligence Architect:
- 15+ years of experience architecting data pipelines and data ingestion, for both batch and streaming data, from different sources to a data warehouse/data lake; 7+ years working in the AWS cloud
- Experience designing and delivering data warehousing and analytics projects, including cloud technologies such as EMR, Lambda, cloud storage, BigQuery, etc.
- Experience on at least 8 to 10 large projects building and optimising Big Data pipelines, architectures, and datasets
- Experience working on cloud platforms (AWS/Azure/GCP) and their architecture for data solutions
- Hands-on experience with the Big Data stack (HDFS, Spark, MapReduce, Hadoop, Sqoop, Pig, Hive, HBase, Flume, Kafka, and emerging technologies)
- Minimum of 5 years of solution development experience, with at least 3 years of solution architecture
- Experience working with HTML5, CSS3, PHP, and Node.js
- Strong skills with web services, including REST, SOAP, and API management
- Experience with enterprise data integration technologies, including ETL, XSLT, XML, and JSON
- Working knowledge of chat APIs and integrating them with external channels
- Experience working with databases like SQL Server and writing complex SQL queries
- Experience integrating custom and packaged products across complex enterprise environments
- Experience with RedHat Linux and Windows Server
- Excellent communication skills; deep analytical and problem-solving skills; project and resource management skills

(B) As Business Intelligence Solution Manager:
- Propose architectural solutions to move and improve infrastructure and data from on-premise to cloud; experience preparing tech-spec documents for large-scale, complex architecture solutions
- Effectively strategize the migration of client data using Google Cloud, AWS, or other cloud technologies
- Provide advisory and thought leadership on analytics environments leveraging cloud-based platforms and big data technologies, including integration with existing data and analytics platforms and tools
- Design and implement scalable data architectures leveraging BigQuery, Hadoop, NoSQL, and emerging technologies, covering on-premise and cloud-based deployment patterns
- Provide consulting and solution support to customers during their data warehouse or data lake modernisation, especially in the design and implementation of data-centric architectures
- Proactively identify niche opportunity areas within the data and analytics framework, and drive client discussions by delivering presentations, demos, and proofs-of-concept to showcase capabilities and transformation solutions
- Manage the team and handle delivery of 8-10 projects
- Be a thought leader around all things data, reviewing current and future needs alongside the executive team
- Strong leadership, with the ability to collaborate across the organisation, work with emerging technologies, and guide teams delivering complex data solutions, standards, architectural governance, design patterns, and practices

Qualifications & Mandatory Skills:
- Bachelor's/Master's degree in Computer Science, Engineering, or a related field
- Experience in R and Python
- 8+ years of experience on Azure, AWS, or Google Cloud
- 5+ years of experience with Big Data / Hadoop / data ingestion (Apache Spark, Hadoop, or Kafka)
- 5+ years of experience with metadata and governance
- Hands-on experience in one or more object-oriented programming languages (Java, Scala, Python)
- 10+ years of experience with Agile methodologies: Scrum, SAFe, or Kanban
- Ability to define data models and data storage strategies, including knowledge of distributed data systems
- Expert level in building big data solutions
- Hands-on experience building cloud-scalable, real-time, high-performance data lake solutions using AWS, Azure, EMR, S3, Hive & Spark, and Athena (see the Athena sketch after this listing)
- Expertise in an agile and iterative model
- Expert level in relational SQL
- Experience with scripting languages such as Shell, Python, and R, and with emerging technologies
- Experience with source control tools such as GitHub and the related development process
- Experience in Engineering and Construction is preferable
- Experience with RDBMS including, but not limited to, MySQL, PostgreSQL, SQL Server, and Oracle
- Solid knowledge of the SDLC and enforcement of standard software development practices
- Ability to review and critique code and proposed designs and offer thoughtful feedback in a collegial fashion
- Proven history of self-direction, creativity, and ability to meet deadlines
- Skilled in writing and communication: able to craft messages so they are clearly expressed and easily understood
- Ability to work well with other technical leads and foster an environment of collaboration and learning
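For the Athena item above, a minimal sketch of running a query from Python with boto3 might look like this; the bucket, database, and query are placeholders, and production code would add error handling and result pagination.

```python
import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Kick off a query against a hypothetical data-lake table
qid = athena.start_query_execution(
    QueryString="SELECT project_id, COUNT(*) FROM events GROUP BY project_id",
    QueryExecutionContext={"Database": "analytics_lake"},
    ResultConfiguration={"OutputLocation": "s3://example-bucket/athena-results/"},
)["QueryExecutionId"]

# Poll until the query reaches a terminal state
while True:
    status = athena.get_query_execution(QueryExecutionId=qid)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    # First row is the header; print every row's values
    for row in athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]:
        print([col.get("VarCharValue") for col in row["Data"]])
```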

Posted 1 week ago

Apply

3.0 - 7.0 years

11 - 15 Lacs

Gurugram

Work from Office

Overview: We are seeking an experienced Data Modeller with expertise in designing and implementing data models for modern data platforms. This role requires deep knowledge of data modeling techniques and healthcare data structures, and experience with the Databricks Lakehouse architecture. The ideal candidate will have a proven track record of translating complex business requirements into efficient, scalable data models that support analytics and reporting needs.

About the Role: As a Data Modeller, you will be responsible for designing and implementing data models for our Databricks-based modern data platform. You will work closely with business stakeholders, data architects, and data engineers to create logical and physical data models that support the migration from legacy systems to the Databricks Lakehouse architecture, ensuring data integrity, performance, and compliance with healthcare industry standards.

Key Responsibilities:
- Design and implement logical and physical data models for Databricks Lakehouse implementations (see the dimensional-model sketch after this listing)
- Translate business requirements into efficient, scalable data models
- Create and maintain data dictionaries, entity relationship diagrams, and model documentation
- Develop dimensional models, data vault models, and other modeling approaches as appropriate
- Support the migration of data models from legacy systems to the Databricks platform
- Collaborate with data architects to ensure alignment with the overall data architecture
- Work with data engineers to implement and optimize data models
- Ensure data models comply with healthcare industry regulations and standards
- Implement data modeling best practices and standards
- Provide guidance on data modeling approaches and techniques
- Participate in data governance initiatives and data quality assessments
- Stay current with evolving data modeling techniques and industry trends

Qualifications:
- Extensive experience in data modeling for analytics and reporting systems
- Strong knowledge of dimensional modeling, data vault, and other modeling methodologies
- Experience with the Databricks platform and Delta Lake architecture
- Expertise in healthcare data modeling and industry standards
- Experience migrating data models from legacy systems to modern platforms
- Strong SQL skills and experience with data definition languages
- Understanding of data governance principles and practices
- Experience with data modeling tools and technologies
- Knowledge of performance optimization techniques for data models
- Bachelor's degree in Computer Science, Information Systems, or a related field; advanced degree preferred
- Professional certifications in data modeling or related areas

Technical Skills:
- Data modeling methodologies (dimensional, data vault, etc.)
- Databricks platform and Delta Lake
- SQL and data definition languages
- Data modeling tools (erwin, ER/Studio, etc.)
- Data warehousing concepts and principles
- ETL/ELT processes and data integration
- Performance tuning for data models
- Metadata management and data cataloging
- Cloud platforms (AWS, Azure, GCP)
- Big data technologies and distributed computing

Healthcare Industry Knowledge:
- Healthcare data structures and relationships
- Healthcare terminology and coding systems (ICD, CPT, SNOMED, etc.)
- Healthcare data standards (HL7, FHIR, etc.)
- Healthcare analytics use cases and requirements
- Optionally: healthcare regulatory requirements (HIPAA, HITECH, etc.), clinical and operational data modeling challenges, and population health and value-based care data needs

Personal Attributes:
- Strong analytical and problem-solving skills
- Excellent attention to detail and a focus on data quality
- Ability to translate complex business requirements into technical solutions
- Effective communication skills with both technical and non-technical stakeholders
- Collaborative approach to working with cross-functional teams
- Self-motivated, with the ability to work independently
- Continuous learner who stays current with industry trends

What We Offer:
- The opportunity to design data models for cutting-edge healthcare analytics
- A collaborative and innovative work environment
- A competitive compensation package
- Professional development opportunities
- Work with leading technologies in the data space

This position requires a unique combination of data modeling expertise, technical knowledge, and healthcare industry understanding. The ideal candidate will have demonstrated success in designing efficient, scalable data models and a passion for creating data structures that enable powerful analytics and insights.
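To illustrate the first responsibility above, here is a minimal, hypothetical sketch of a dimensional model on Delta Lake, expressed as SQL DDL issued through PySpark on Databricks; the table and column names are invented and not tied to any real schema.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("dim_model_ddl").getOrCreate()

# Dimension table: one row per patient version (hypothetical columns)
spark.sql("""
    CREATE TABLE IF NOT EXISTS dim_patient (
        patient_key    BIGINT,
        patient_id     STRING,
        birth_year     INT,
        gender         STRING,
        effective_from DATE,
        effective_to   DATE
    ) USING DELTA
""")

# Fact table: one row per encounter, keyed to the dimension
spark.sql("""
    CREATE TABLE IF NOT EXISTS fact_encounter (
        encounter_key  BIGINT,
        patient_key    BIGINT,
        encounter_date DATE,
        icd10_code     STRING,
        total_charge   DECIMAL(12, 2)
    ) USING DELTA
    PARTITIONED BY (encounter_date)
""")
```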

Posted 1 week ago

Apply

2.0 - 5.0 years

12 - 13 Lacs

Bengaluru

Work from Office

Location: Bengaluru
Designation: Consultant
Entity: Deloitte Touche Tohmatsu India LLP

Your potential, unleashed. India's impact on the global economy has increased at an exponential rate, and Deloitte presents an opportunity to unleash and realize your potential amongst cutting-edge leaders and organizations shaping the future of the region, and indeed, the world beyond. At Deloitte, you bring your whole self to work, every day. Combine that with our drive to propel with purpose, and you have the perfect playground to collaborate, innovate, grow, and make an impact that matters.

The Team: Deloitte's Technology & Transformation practice can help you uncover and unlock the value buried deep inside vast amounts of data. Our global network provides strategic guidance and implementation services to help companies manage data from disparate sources and convert it into accurate, actionable information that can support fact-driven decision-making and generate an insight-driven advantage. Our practice addresses the continuum of opportunities in business intelligence & visualization, data management, performance management, and next-generation analytics and technologies, including big data, cloud, cognitive, and machine learning.

Job Description: Genesys PureCloud (ININ) solution architecture, development, implementation, configuration, and support. The candidate should be able to design and implement Genesys PureCloud environment configuration, implementation, and deployment to production, and is expected to work with internal telecom and infrastructure groups to support the PureCloud implementation. Ensure the necessary technical help and support for the Genesys PureCloud implementation. The candidate is expected to liaise with business users and call center groups for PureCloud implementation and support, and to participate in business stakeholder meetings and the orientation and ongoing training of new IT staff.

How you'll grow:
Connect for impact - Our exceptional team of professionals across the globe are solving some of the world's most complex business problems, as well as directly supporting our communities, the planet, and each other. Know more in our Global Impact Report and our India Impact Report.
Empower to lead - You can be a leader irrespective of your career level. Our colleagues are characterised by their ability to inspire, support, and provide opportunities for people to deliver their best and grow both as professionals and human beings. Know more about Deloitte and our One Young World partnership.
Inclusion for all - At Deloitte, people are valued and respected for who they are and are trusted to add value to their clients, teams, and communities in a way that reflects their own unique capabilities. Know more about everyday steps that you can take to be more inclusive. At Deloitte, we believe in the unique skills, attitude, and potential each and every one of us brings to the table to make an impact that matters.
Drive your career - At Deloitte, you are encouraged to take ownership of your career. We recognise there is no one-size-fits-all career path, and global, cross-business mobility and up/re-skilling are all within the range of possibilities to shape a unique and fulfilling career. Know more about Life at Deloitte.
Everyone's welcome - entrust your happiness to us. Our workspaces and initiatives are geared towards your 360-degree happiness. This includes specific needs you may have in terms of accessibility, flexibility, safety and security, and caregiving. Here's a glimpse of things that are in store for you.

Interview tips: We want job seekers exploring opportunities at Deloitte to feel prepared, confident, and comfortable. To help you with your interview, we suggest that you do your research, know some background about the organisation and the business area you're applying to, and check out recruiting tips from Deloitte professionals.

Posted 1 week ago

Apply

0.0 - 4.0 years

7 - 11 Lacs

Bengaluru

Work from Office

Jul 14, 2025
Location: Bengaluru
Designation: Analyst
Entity: Deloitte Touche Tohmatsu India LLP

Your potential, unleashed. India's impact on the global economy has increased at an exponential rate, and Deloitte presents an opportunity to unleash and realize your potential amongst cutting-edge leaders and organizations shaping the future of the region, and indeed, the world beyond. At Deloitte, you bring your whole self to work, every day. Combine that with our drive to propel with purpose, and you have the perfect playground to collaborate, innovate, grow, and make an impact that matters.

The Team: Deloitte's Technology & Transformation practice can help you uncover and unlock the value buried deep inside vast amounts of data. Our global network provides strategic guidance and implementation services to help companies manage data from disparate sources and convert it into accurate, actionable information that can support fact-driven decision-making and generate an insight-driven advantage. Our practice addresses the continuum of opportunities in business intelligence & visualization, data management, performance management, and next-generation analytics and technologies, including big data, cloud, cognitive, and machine learning.

Job Description: Managing project management activities for technical workstreams, with technical acumen and an understanding of basic technology, design elements, etc. The candidate should have:
- The ability to translate technical or complex offerings into client-friendly language
- Excellent verbal and written communication
- Strong presentation and interpersonal skills
- The ability to listen actively and understand client needs
- Confident and clear articulation of ideas

How you'll grow:
Connect for impact - Our exceptional team of professionals across the globe are solving some of the world's most complex business problems, as well as directly supporting our communities, the planet, and each other. Know more in our Global Impact Report and our India Impact Report.
Empower to lead - You can be a leader irrespective of your career level. Our colleagues are characterised by their ability to inspire, support, and provide opportunities for people to deliver their best and grow both as professionals and human beings. Know more about Deloitte and our One Young World partnership.
Inclusion for all - At Deloitte, people are valued and respected for who they are and are trusted to add value to their clients, teams, and communities in a way that reflects their own unique capabilities. Know more about everyday steps that you can take to be more inclusive. At Deloitte, we believe in the unique skills, attitude, and potential each and every one of us brings to the table to make an impact that matters.
Drive your career - At Deloitte, you are encouraged to take ownership of your career. We recognise there is no one-size-fits-all career path, and global, cross-business mobility and up/re-skilling are all within the range of possibilities to shape a unique and fulfilling career. Know more about Life at Deloitte.
Everyone's welcome - entrust your happiness to us. Our workspaces and initiatives are geared towards your 360-degree happiness. This includes specific needs you may have in terms of accessibility, flexibility, safety and security, and caregiving. Here's a glimpse of things that are in store for you.

Interview tips: We want job seekers exploring opportunities at Deloitte to feel prepared, confident, and comfortable. To help you with your interview, we suggest that you do your research, know some background about the organisation and the business area you're applying to, and check out recruiting tips from Deloitte professionals.

Posted 1 week ago

Apply

5.0 - 9.0 years

11 - 16 Lacs

Mumbai

Work from Office

Manage day-to-day operational activity and ensure adherence to SLAs and TATs. Ensure that processing is done in compliance with laid-down processes and in line with the department's policies. Develop, implement, and monitor day-to-day operational systems and processes that provide visibility into goals and progress for our key initiatives. Manage on-time and accurate data reporting; be expert at creating visually appealing, persuasive, and effective presentations. Plan, monitor, and analyze key metrics for the day-to-day performance of the operations team to ensure efficient and timely completion of tasks. Uphold organization policies and standards, ensuring regulations are followed. Be independent and resourceful, with the ability to identify opportunities to optimize performance, a strong working knowledge of big data, data analysis, and performance metrics, and a proven ability to plan and manage operational processes for maximum efficiency and productivity.

Project Management: Devise strategies to ensure growth of programs enterprise-wide, identifying and implementing process improvements that will maximize output and minimize costs. Set up (configure), test, and deliver batch solution requests (for new solutions as well as changes to existing ones) to customers in a timely manner, ensuring that appropriate standards are followed and customer needs are met.

Interdepartmental Coordination: Build and maintain relationships with all department heads, external partners, and vendors to make decisions regarding operational activity and strategic goals. Work with the solution consulting team in interacting with clients to gather detailed business requirements; facilitate communication with clients and sales regarding project progress and investigations. Communicate customer issues to the operations team and devise ways of improving the customer experience, including resolving problems and complaints. Provide consulting during the proposal phase led by Sales to secure the sale.

Impact You'll Make - Experience and Skills:
- Qualification: Master's degree in business administration, preferably in the financial services industry
- Minimum 5+ years of relevant experience; hands-on experience in managing operational processes
- Strong working knowledge of big data, data analysis, Linux/Unix, SQL, and performance metrics
- Proven ability to plan and manage operational processes for maximum efficiency and productivity
- Strong working knowledge of industry regulations and legislative guidelines
- Ability to analyze moderate-to-complex data using logic and quantitative reasoning, and an intuitive capacity for problem solving
- Flexibility to travel as needed; executive presence and assertiveness
- Self-starter: able to work independently, handle ambiguous situations, and exercise judgement in a variety of situations
- Strong communication, organizational, verbal, and written skills
- High degree of responsibility and ownership; strong multitasking and coordination; tenaciously looking for ways to get results

Essential Competencies:
- Ability to build trusting relationships across all levels and in the immediate/extended team internationally; should be known and regarded as a trusted, competent advisor
- Driving innovation: a believer in continuous improvement of services, processes, and operational efficiency; demonstrates curiosity and critical thinking
- Business acumen: spends time to ensure understanding of the business and aligns accordingly
- Change agent: ability to diagnose correctly, and to design and execute interventions; ensures communication through appropriate channels in a concise and proactive manner
- Execution champion: focuses on and ensures closure without compromising on the quality of the output; raises/flags issues as necessary and moves forward with a solutioning approach

This is a hybrid position and involves regular performance of job responsibilities virtually as well as in-person at an assigned TU office location for a minimum of two days a week.

TransUnion Job Title: Specialist III, Batch Processing

Posted 1 week ago

Apply

7.0 - 12.0 years

9 - 14 Lacs

Bengaluru

Work from Office

Role: Lead Data Scientist
Location: Bengaluru

What you'll do: We're MiQ, a global programmatic media partner for marketers and agencies. Our people are at the heart of everything we do, so you will be too. No matter the role or the location, we're all united in the vision to lead the programmatic industry and make it better. As a Lead Data Scientist in our data science department, you'll have the chance to:
- Drive the building of end-to-end AI/ML-based Ad-Tech solutions with a high-performing team
- Own projects and processes and make sure you deliver with high quality
- Collaborate with wider capabilities and functions for collective success
- Drive innovation in the Data Science team by continuously upskilling, researching, applying, and then inculcating innovative practices within the team

Who are your stakeholders: The Team Lead's internal customers/stakeholders include:
- Account Managers / Traders / Sales / Analysts: the Team Lead drives building solutions that are used primarily by these internal customers through MiQ proprietary platforms like Lab and Hub. The Team Lead usually does not directly collaborate with these customers; market requirements and solution feedback come through the Product Management team.
- Product Management: the Team Lead drives development of solutions in their team, prioritised as part of the product roadmap.
- Engineering teams: work collaboratively with the engineering teams.
- Program Management: work with program managers to make sure the right project delivery processes are set and followed.

What you'll bring:
- 7+ years of experience solving data science problems using machine learning, data mining algorithms, and big data tools
- Strong experience with advanced SQL and good experience in the big data ecosystem: Spark, Hive/Pig/MapReduce
- Experience delivering at least one product, with involvement in business problem identification, proposing and evaluating solutions, identifying required data sources, building data pipelines, visualising the outputs, and taking actions based on the data outputs
- Strong experience with at least one programming language, e.g., Java, Python, R (exposure to multiple is a plus)
- Strong experience delivering data science projects leveraging cloud infrastructure
- Experience with the Agile framework
- Experience leading a data science team is a plus
- Highly passionate about making an impact on business using data science; believes in continuous learning and sharing knowledge with peers

We've highlighted some key skills, experience, and requirements for this role. But please don't worry if you don't meet every single one. Our talent team strives to find the best people. They might see something in your background that's a fit for this role or another opportunity at MiQ. If you have a passion for the role, please still apply.

What impact will you create: You will drive the development of end-to-end data-driven / AI / ML / statistics-based Ad-Tech solutions that position MiQ as the leading programmatic partner for advertisers across the lifecycle of a marketing campaign, which consequently contributes to MiQ's revenue and gross profit (a minimal modeling sketch follows at the end of this listing). You will own your team's quarterly and yearly deliverables, making sure the roadmap solutions produced are of high quality and drive business impact, always looking for opportunities to help the team deliver solutions that differentiate MiQ from competitors by researching and brainstorming on cutting-edge AI techniques and technologies. You will drive team focus on ML/AI solutions that scale, from a tech as well as a functional point of view, applying software development best practices and MLOps so solutions transition smoothly to production and can be hosted on MiQ's proprietary platforms. You will actively contribute to building the MiQ brand of data science by bringing innovation and research into focus internally and externally: submitting white papers, representing MiQ and presenting at conferences, participating in hackathons, etc.

What's in it for you: Our Center of Excellence is the very heart of MiQ, and it's where the magic happens. It means everything you do and everything you create will have a huge impact on our entire global business. MiQ is incredibly proud to foster a welcoming culture. We do everything possible to make sure everyone feels valued for what they bring. With global teams committed to diversity, equity, and inclusion, we're always moving towards becoming an even better place to work.

Values: Our values are so much more than statements. They unite MiQers in every corner of the world. They shape the way we work and the decisions we make. And they inspire us to stay true to ourselves and to aim for better. Our values are there to be embraced by everyone so that we naturally live and breathe them. Just like inclusivity, our values flow through everything we do, no matter how big or small.
- We do what we love - Passion
- We figure it out - Determination
- We anticipate the unexpected - Agility
- We always unite - Unite
- We dare to be unconventional - Courage

Benefits: Every region and office has specific perks and benefits, but every person joining MiQ can expect:
- A hybrid work environment
- New-hire orientation with job-specific onboarding and training
- Internal and global mobility opportunities
- Competitive healthcare benefits
- Bonus and performance incentives
- Generous annual PTO and paid parental leave, with two additional paid days to acknowledge holidays, cultural events, or inclusion initiatives
- Employee resource groups designed to connect people across all MiQ regions, drive action, and support our communities

Apply today! Equal Opportunity Employer
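As a minimal, invented illustration of the Spark-plus-ML skill set this role calls for (not MiQ's actual models), the sketch below trains a toy click-through-rate classifier with a Spark ML pipeline; the data and feature names are made up.

```python
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import StringIndexer, VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("ctr_demo").getOrCreate()

# Toy impression log: device type, ad position, and whether it was clicked
impressions = spark.createDataFrame(
    [("mobile", 1, 0.0), ("desktop", 2, 0.0),
     ("mobile", 1, 1.0), ("tablet", 3, 0.0)],
    ["device", "ad_position", "clicked"],
)

# Encode the categorical feature, assemble a feature vector, fit the model
pipeline = Pipeline(stages=[
    StringIndexer(inputCol="device", outputCol="device_idx"),
    VectorAssembler(inputCols=["device_idx", "ad_position"],
                    outputCol="features"),
    LogisticRegression(featuresCol="features", labelCol="clicked"),
])
model = pipeline.fit(impressions)
model.transform(impressions).select("device", "probability").show()
```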

Posted 1 week ago

Apply

18.0 - 20.0 years

12 - 16 Lacs

Hyderabad

Work from Office

Roles and Responsibilities: The role of Delivery Manager is crucial for ensuring the successful delivery of projects and services to clients. It requires a combination of leadership, project management, communication, and technical skills. This position is responsible for overseeing the delivery of projects and services to clients, ensuring that they meet the agreed-upon quality standards, timelines, and budgets. Below is an overview of the key responsibilities, required skills, and qualifications for the role.
- Lead a large team with complete P&L ownership, servicing multiple engagements with varied offerings
- Actively support sales teams in new-logo acquisition and solutions teams in designing innovative solutions to business needs
- Network with senior stakeholders in the region and position the company as a partner of choice
- Lead the overall strategy for engineering teams in India and oversee future growth
- Lead pricing strategies and contract renewals, and drive P&L in long-term deals
- Monitor risks throughout the project lifecycle and adjust plans as necessary
- Strong organizational and multitasking abilities

Qualifications Required: Minimum 18-20 years of experience in IT product services, with a Computer Engineering / Computer Science background.

Skills and Experience Required:
- Must have handled large projects in cloud applications across different verticals/industries
- Must have grown and driven offshore teams with a headcount of more than 200 employees offshore; proven experience in high-volume team ramp-ups
- Proven experience in client management, coordination, and negotiation; must be able to connect closely with customers and mine existing accounts, cross-selling enhancements or value-adds to bring in new business from existing clients
- Proven experience in delivery management for cross- and next-gen skills and projects, using methodologies such as Agile (Scrum, Feature-Driven, Lean, etc.)
- Preferred expertise in Java, Spring, Hibernate, Web Services, Cloud, and Microservices
- Must be well-versed in next-gen technologies such as Digital, Cloud, Analytics, Big Data, and AI/ML
- Proven experience in pre-sales and solutioning for deals, RFPs/RFIs, proposals, etc.
- Strong P&L and operational management experience
- Strong people management, mentoring, and leadership skills
- Excellent written, spoken, and presentation skills

Posted 1 week ago

Apply

8.0 - 13.0 years

12 - 16 Lacs

Gurugram

Remote

The Data Engineer is responsible for managing and operating Tableau, Tableau Bridge server, Databricks, dbt, SSRS, SSIS, AWS DWS, AWS AppFlow, and Power BI. The engineer will work closely with the customer and team to manage and operate the cloud data platform.

Job Description:
- Provides Level 3 operational coverage: troubleshooting incidents/problems, including collecting logs, cross-checking against known issues, and investigating common root causes (for example, failed batches, or infra-related items such as connectivity to source and network issues)
- Knowledge management: create/update runbooks and entitlements as needed
- Governance: watch all configuration changes to batches and infrastructure (cloud platform), mapping them to proper documentation and aligning resources
- Communication: lead and act as a point of contact for the customer from off-site, handling communication, escalation, and issue isolation, and coordinating with off-site resources while level-setting expectations across stakeholders
- Change management: align resources for on-demand changes and coordinate with stakeholders as required
- Request management: handle user requests; if a request is not runbook-based, create a new KB article or update the runbook accordingly
- Incident management and problem management: root cause analysis, and coming up with preventive measures and recommendations such as enhancing monitoring or systematic changes as needed

SKILLS:
- Good hands-on experience with Tableau, Tableau Bridge server, Databricks, dbt, SSRS, SSIS, AWS DWS, AWS AppFlow, and Power BI
- Ability to read and write SQL and stored procedures
- Good hands-on experience in configuring, managing, and troubleshooting, along with general analytical and problem-solving skills
- Excellent written and verbal communication skills; ability to communicate technical information and ideas so others will understand
- Ability to successfully work in and promote inclusiveness in small groups

JOB COMPLEXITY: This role requires extensive problem-solving skills and the ability to research an issue, determine the root cause, and implement the resolution; research of various sources, such as Databricks/AWS/Tableau documentation, may be required to identify and resolve issues. Must have the ability to prioritize issues and multi-task.

EXPERIENCE/EDUCATION: Requires a Bachelor's degree in computer science or another related field, plus 8+ years of hands-on experience in configuring and managing AWS/Tableau and Databricks solutions. Experience with Databricks and Tableau environments is desired.

Posted 1 week ago

Apply

10.0 - 15.0 years

12 - 16 Lacs

Gurugram

Remote

Job Summary: The Data Engineer is responsible for managing and operating Tableau, Tableau Bridge server, Databricks, dbt, SSRS, SSIS, AWS DWS, AWS AppFlow, and Power BI. The engineer will work closely with the customer and team to manage and operate the cloud data platform.

Job Description:
- Provides Level 3 operational coverage: troubleshooting incidents/problems, including collecting logs, cross-checking against known issues, and investigating common root causes (for example, failed batches, or infra-related items such as connectivity to source and network issues)
- Knowledge management: create/update runbooks and entitlements as needed
- Governance: watch all configuration changes to batches and infrastructure (cloud platform), mapping them to proper documentation and aligning resources
- Communication: lead and act as a point of contact for the customer from off-site, handling communication, escalation, and issue isolation, and coordinating with off-site resources while level-setting expectations across stakeholders
- Change management: align resources for on-demand changes and coordinate with stakeholders as required
- Request management: handle user requests; if a request is not runbook-based, create a new KB article or update the runbook accordingly
- Incident management and problem management: root cause analysis, and coming up with preventive measures and recommendations such as enhancing monitoring or systematic changes as needed

KNOWLEDGE/SKILLS/ABILITY:
- Good hands-on experience with Tableau, Tableau Bridge server, Databricks, dbt, SSRS, SSIS, AWS DWS, AWS AppFlow, and Power BI
- Ability to read and write SQL and stored procedures
- Good hands-on experience in configuring, managing, and troubleshooting, along with general analytical and problem-solving skills
- Excellent written and verbal communication skills; ability to communicate technical information and ideas so others will understand
- Ability to successfully work in and promote inclusiveness in small groups

JOB COMPLEXITY: This role requires extensive problem-solving skills and the ability to research an issue, determine the root cause, and implement the resolution; research of various sources, such as Databricks/AWS/Tableau documentation, may be required to identify and resolve issues. Must have the ability to prioritize issues and multi-task.

EXPERIENCE/EDUCATION: Requires a Bachelor's degree in computer science or another related field, plus 10+ years of hands-on experience in configuring and managing AWS/Tableau and Databricks solutions. Experience with Databricks and Tableau environments is desired.

Posted 1 week ago

Apply

7.0 - 12.0 years

14 - 18 Lacs

Gurugram, Bengaluru

Work from Office

Summary: The Data Engineer is responsible for managing and operating Databricks, dbt, SSRS, SSIS, AWS DWS, AWS AppFlow, and Power BI/Tableau. The engineer will work closely with the customer and team to manage and operate the cloud data platform.

Job Description:
- Leads Level 4 operational coverage: resolving pipeline issues, proactive monitoring of sensitive batches, RCA and retrospection of issues, and documenting defects
- Design, build, test, and deploy fixes to non-production environments for customer testing; work with the customer to deploy fixes to production upon receiving customer acceptance
- Cost/performance optimization and audit/security, including any associated infrastructure changes
- Troubleshooting incidents/problems, including collecting logs, cross-checking against known issues, and investigating common root causes (for example, failed batches, or infra-related items such as connectivity to source and network issues)
- Knowledge management: create/update runbooks as needed
- Governance: watch all configuration changes to batches and infrastructure (cloud platform), mapping them to proper documentation and aligning resources
- Communication: lead and act as a point of contact for the customer from off-site, handling communication, escalation, and issue isolation, and coordinating with off-site resources while level-setting expectations across stakeholders
- Change management: align resources for on-demand changes and coordinate with stakeholders as required
- Request management: handle user requests; if a request is not runbook-based, create a new KB article or update the runbook accordingly
- Incident management and problem management: root cause analysis, and coming up with preventive measures and recommendations such as enhancing monitoring or systematic changes as needed

Skills:
- Good hands-on experience with Databricks, dbt, SSRS, SSIS, AWS DWS, AWS AppFlow, and Power BI/Tableau
- Ability to read and write SQL and stored procedures
- Good hands-on experience in configuring, managing, and troubleshooting, along with general analytical and problem-solving skills
- Excellent written and verbal communication skills; ability to communicate technical information and ideas so others will understand
- Ability to successfully work in and promote inclusiveness in small groups

Experience/Education: Requires a Bachelor's degree in computer science or another related field, plus 10+ years of hands-on experience in configuring and managing AWS/Tableau and Databricks solutions. Experience with Databricks and Tableau environments is desired.

Job Complexity: This role requires extensive problem-solving skills and the ability to research an issue, determine the root cause, and implement the resolution; research of various sources, such as Databricks/AWS/Tableau documentation, may be required to identify and resolve issues. Must have the ability to prioritize issues and multi-task.

Work Location: Remote (work from home)
Shift: Rotational shifts (24/7)

Posted 1 week ago

Apply

9.0 - 14.0 years

0 - 3 Lacs

Bengaluru

Remote

Job Description: As a GCP Data Engineer, your role will involve designing, developing, and maintaining data solutions on the Google Cloud Platform. You will be responsible for building and optimizing data pipelines, ensuring data quality and reliability, and implementing data processing and transformation logic. Your expertise in Databricks, Python, SQL, PySpark/Scala, and Informatica will be essential for the following key responsibilities:

Key Responsibilities:
- Designing and developing data pipelines: design and implement scalable and efficient data pipelines using GCP-native services (e.g., Cloud Composer, Dataflow, BigQuery) and tools like Databricks, PySpark, and Scala, covering data ingestion, transformation, and loading (ETL/ELT) processes
- Data modeling and database design: develop data models and schema designs to support efficient data storage and analytics using tools like BigQuery, Cloud Storage, or other GCP-compatible storage solutions
- Data integration and orchestration: orchestrate and schedule complex data workflows using Cloud Composer (Apache Airflow) or similar orchestration tools (a minimal DAG sketch follows this listing); manage end-to-end data integration across cloud and on-premises systems
- Data quality and governance: implement data quality checks, validation rules, and governance processes to ensure data accuracy, integrity, and compliance with organizational standards and external regulations
- Performance optimization: optimize pipelines and queries to enhance performance and reduce processing time, including tuning Spark jobs and SQL queries and leveraging caching mechanisms or parallel processing in GCP
- Monitoring and troubleshooting: monitor data pipeline performance using the GCP operations suite (formerly Stackdriver) or other monitoring tools; identify bottlenecks and troubleshoot ingestion, transformation, or loading issues
- Documentation and collaboration: maintain clear and comprehensive documentation for data flows, ETL logic, and pipeline configurations; collaborate closely with data scientists, business analysts, and product owners to understand requirements and deliver data engineering solutions

Skills and Qualifications:
- 5+ years of experience in a Data Engineer role, with exposure to large-scale data processing
- Strong hands-on experience with Google Cloud Platform (GCP), particularly services like BigQuery, Cloud Storage, Dataflow, and Cloud Composer
- Proficiency in Python and/or Scala, with a strong grasp of PySpark
- Experience working with Databricks in a cloud environment
- Solid experience building and maintaining big data pipelines, architectures, and datasets
- Strong knowledge of Informatica for ETL/ELT processes
- Proven track record of manipulating, processing, and extracting value from large-scale, unstructured datasets
- Working knowledge of stream processing and scalable data stores (e.g., Kafka, Pub/Sub, BigQuery)
- Solid understanding of data modeling concepts and best practices in both OLTP and OLAP systems
- Familiarity with data quality frameworks, governance policies, and compliance standards
- Skilled in performance tuning, job optimization, and cost-efficient cloud architecture design
- Excellent communication and collaboration skills, to work effectively in cross-functional and client-facing roles
- Bachelor's degree in Computer Science, Information Systems, or a related field (Mathematics, Engineering, etc.)
- Bonus: experience with distributed computing frameworks like Hadoop and Spark
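A Cloud Composer workflow like the one named above is defined as an Apache Airflow DAG. The following is a minimal, hypothetical sketch assuming Airflow 2.x with the Google provider package installed; the DAG name, dataset, tables, and SQL are all placeholders.

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import (
    BigQueryInsertJobOperator,
)

with DAG(
    dag_id="daily_sales_rollup",       # hypothetical pipeline name
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Aggregate yesterday's raw events into a reporting table (placeholder SQL)
    rollup = BigQueryInsertJobOperator(
        task_id="rollup_sales",
        configuration={
            "query": {
                "query": """
                    INSERT INTO `example_project.reporting.daily_sales`
                    SELECT DATE(event_ts) AS day, SUM(amount) AS revenue
                    FROM `example_project.raw.sales_events`
                    WHERE DATE(event_ts) = DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY)
                    GROUP BY day
                """,
                "useLegacySql": False,
            }
        },
    )
```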

Posted 1 week ago

Apply

7.0 - 9.0 years

8 - 14 Lacs

Coimbatore

Work from Office

Job Summary: We are looking for a seasoned Tech Anchor with deep expertise in Big Data technologies and Python to lead technical design, development, and mentoring across data-driven projects. This role demands a strong grasp of scalable data architecture, problem-solving capabilities, and hands-on experience with distributed systems and modern data frameworks.

Key Responsibilities:
- Provide technical leadership across Big Data and Python-based projects
- Architect, design, and implement scalable data pipelines and processing systems
- Guide teams on best practices in data modeling, ETL/ELT development, and performance optimization (a short Spark tuning sketch follows this listing)
- Collaborate with data scientists, analysts, and stakeholders to ensure effective data solutions
- Conduct code reviews and mentor junior engineers to improve code quality and skills
- Evaluate and implement new tools and frameworks to enhance data capabilities
- Troubleshoot complex data-related issues and support production deployments
- Ensure compliance with data security and governance standards
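As one concrete example of the performance-optimization guidance mentioned above, here is a minimal PySpark sketch (with invented toy data) of a classic tuning technique: broadcasting a small dimension table so a join avoids a full shuffle.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("broadcast_join_demo").getOrCreate()

# Large fact table and a small lookup table (toy data for illustration)
facts = spark.createDataFrame(
    [(1, 100.0), (2, 250.0), (1, 75.0)], ["store_id", "sale_amount"])
stores = spark.createDataFrame(
    [(1, "Coimbatore"), (2, "Chennai")], ["store_id", "city"])

# The broadcast hint ships the small table to every executor,
# turning a shuffle join into a cheaper map-side join
joined = facts.join(broadcast(stores), on="store_id")
joined.groupBy("city").sum("sale_amount").show()
```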

Posted 1 week ago

Apply

7.0 - 9.0 years

8 - 14 Lacs

Kanpur

Work from Office

Job Summary: We are looking for a seasoned Tech Anchor with deep expertise in Big Data technologies and Python to lead technical design, development, and mentoring across data-driven projects. This role demands a strong grasp of scalable data architecture, problem-solving capabilities, and hands-on experience with distributed systems and modern data frameworks.

Key Responsibilities:
• Provide technical leadership across Big Data and Python-based projects
• Architect, design, and implement scalable data pipelines and processing systems
• Guide teams on best practices in data modeling, ETL/ELT development, and performance optimization
• Collaborate with data scientists, analysts, and stakeholders to ensure effective data solutions
• Conduct code reviews and mentor junior engineers to improve code quality and skills
• Evaluate and implement new tools and frameworks to enhance data capabilities
• Troubleshoot complex data-related issues and support production deployments
• Ensure compliance with data security and governance standards

Posted 1 week ago

Apply

7.0 - 9.0 years

8 - 14 Lacs

Chandigarh

Work from Office

Job Summary: We are looking for a seasoned Tech Anchor with deep expertise in Big Data technologies and Python to lead technical design, development, and mentoring across data-driven projects. This role demands a strong grasp of scalable data architecture, problem-solving capabilities, and hands-on experience with distributed systems and modern data frameworks.

Key Responsibilities:
• Provide technical leadership across Big Data and Python-based projects
• Architect, design, and implement scalable data pipelines and processing systems
• Guide teams on best practices in data modeling, ETL/ELT development, and performance optimization
• Collaborate with data scientists, analysts, and stakeholders to ensure effective data solutions
• Conduct code reviews and mentor junior engineers to improve code quality and skills
• Evaluate and implement new tools and frameworks to enhance data capabilities
• Troubleshoot complex data-related issues and support production deployments
• Ensure compliance with data security and governance standards

Posted 1 week ago

Apply

6.0 - 11.0 years

10 - 20 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

Big Data analysis; Inventory Metadata modelling and development
Experience working with Graph DB (Neo4j)
Proficient with TMF-633, 634, 638 & 639 standards
Relational to Graph database migration
Knowledge of network concepts and data model exposure

Required Candidate Profile: Data Scientist with a minimum of 6 years of experience; experience in Big Data analysis and Inventory Metadata; experience working with Graph DB (Neo4j); proficient with TMF-633, 634, 638 & 639 standards
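To illustrate the relational-to-graph migration named above, a minimal sketch assuming the Neo4j Python driver (v5+) and hypothetical table names, labels, and credentials: rows from a relational inventory table are merged into Neo4j as nodes and relationships.

```python
# Sketch of a relational-to-graph load: device/port rows from a
# relational store become nodes and HAS_PORT relationships in Neo4j.
# Connection details, table, and labels are placeholders.
import sqlite3
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "secret"))

def load_links(tx, rows):
    # MERGE keeps the load idempotent if the migration is re-run.
    tx.run(
        """
        UNWIND $rows AS row
        MERGE (a:Device {id: row.device_id})
        MERGE (b:Port {id: row.port_id})
        MERGE (a)-[:HAS_PORT]->(b)
        """,
        rows=rows,
    )

conn = sqlite3.connect("inventory.db")
rows = [
    {"device_id": d, "port_id": p}
    for d, p in conn.execute("SELECT device_id, port_id FROM device_ports")
]

with driver.session() as session:
    session.execute_write(load_links, rows)
driver.close()
```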

Posted 1 week ago

Apply

3.0 - 7.0 years

5 - 9 Lacs

Kochi

Work from Office

Roles & Responsibilities:
1. Should have strong QA function expertise
2. Well-versed with data, UI, and automation testing experience
3. Willingness to learn AI-driven testing (Vibe Coding/Cursor), etc.
4. Experience in data projects like Big Data, Azure DBT, and MSBI
5. Should have team management experience
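As a small illustration of the data-testing side of this role, a hedged pytest sketch of two dataset checks; the file and column names are hypothetical.

```python
# Two simple data-quality checks over an extract, written with pytest
# and pandas. The CSV path and column names are placeholders.
import pandas as pd
import pytest

@pytest.fixture
def orders():
    return pd.read_csv("orders_extract.csv")

def test_no_null_keys(orders):
    assert orders["order_id"].notna().all(), "order_id must not contain nulls"

def test_amounts_non_negative(orders):
    assert (orders["amount"] >= 0).all(), "amounts must be non-negative"
```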

Posted 1 week ago

Apply

4.0 - 7.0 years

25 - 30 Lacs

Ahmedabad

Work from Office

ManekTech is looking for Data Engineer to join our dynamic team and embark on a rewarding career journey Liaising with coworkers and clients to elucidate the requirements for each task. Conceptualizing and generating infrastructure that allows big data to be accessed and analyzed. Reformulating existing frameworks to optimize their functioning. Testing such structures to ensure that they are fit for use. Preparing raw data for manipulation by data scientists. Detecting and correcting errors in your work. Ensuring that your work remains backed up and readily accessible to relevant coworkers. Remaining up-to-date with industry standards and technological advancements that will improve the quality of your outputs.

Posted 1 week ago

Apply

5.0 - 10.0 years

5 - 14 Lacs

Chennai

Hybrid

Position Description: The Analytics Service department provides system planning, engineering, and operations support for enterprise Descriptive and Predictive Analytics products, as well as Big Data solutions and Analytics Data Management products. These tools are used by the Global Data Insights and Analytics (GDIA) team, data scientists, and IT service delivery partners globally to build line-of-business applications which are directly used by the end-user community. Products and platforms include Power BI, Alteryx, Informatica, Google BigQuery, and more - all of which are critical to Ford's rapidly evolving needs in the area of Analytics and Big Data. In addition, business intelligence reporting products such as Business Objects, Qlik Sense, and WebFOCUS are used by our core lines of business for both employees and dealers. This position is part of the Descriptive Analytics team. It is a Full Stack Engineering and Operations position, engineering and operating our strategic Power BI dashboarding and visualization platform and other products as required, such as Qlik Sense, Alteryx, Business Objects, WebFOCUS, Looker, and other new platforms as they are introduced. The person in this role will collaborate with team members to produce well-tested and documented run books, test cases, and change requests, and handle change implementations as needed. The candidate will start with primarily operational tasks until the products are well understood and will then progress to assisting with engineering tasks.

Skills Required: Power BI

Experience Required:
Position Qualifications:
• Bachelor's Degree in a relevant field
• At least 5 years of experience with Descriptive Analytics technologies such as Power BI, Qlik Sense, Looker, Looker Studio, and WebFOCUS, or similar platforms
• System Administrator experience managing large multi-tenant Windows Server environments based on GCP Compute Engines or OpenShift Virtualization VMs
• Some DevOps experience with GitHub, Tekton pipelines, Terraform code, Google Cloud Services, and PowerShell, and managing large GCP installations
• Strong troubleshooting and problem-solving skills
• Understanding of the product life cycle
• Ability to coordinate issue resolution with vendors on behalf of Ford
• Strong written and verbal communication skills
• Understanding of technologies like GCP, Azure, BigQuery, Teradata, SQL Server, Oracle, DB2, etc.
• Basic understanding of database connectivity and authentication methods (ODBC, JDBC, drivers, REST, WIF, Cloud SA and vault keys, etc.)

Experience Preferred:
• Experience with PowerApps and Power Automate
• Familiarity with Jira
• Familiarity with Ford's EAA, RTP, and EAMS processes and Ford security policies (GRC)

Education Required: Bachelor's Degree
Education Preferred: Bachelor's Degree

Additional Information:
Position Duties:
• Evaluate and engineer existing and upcoming analytics technologies for enterprise consumption in both Google Cloud and on-prem environments
• Develop onboarding, operational, and disaster recovery procedures
• Develop new tools and processes to ensure effective implementation and use of the technologies
• Maintain custom installation guides that are consistent with Ford IT security policy
• Document day-to-day processes, installation, and desk procedures (run books, operational manuals, SharePoint, knowledge base, etc.)
• Engage with customers and power users globally using MS Teams and Viva Engage to assist with (non-software-development) infrastructure and connectivity issues
• Monitor and analyze usage data to ensure optimal performance of the infrastructure; implement permanent corrective actions as needed
• Provide training, consultation, and L2/L3 operational support
• Perform regular dashboard and visualization platform administration tasks
• Provide SME support to users for issue resolution and follow ITIL processes for request, incident, change, event, and problem management
• Provide product consultation for dedicated deployments
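For a flavor of the usage-monitoring duty listed above, a minimal sketch using the google-cloud-bigquery client; the project, table, and column names are hypothetical placeholders.

```python
# Sketch of a weekly usage check: top workspaces by report views from a
# hypothetical audit-log table in BigQuery.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

sql = """
    SELECT workspace, COUNT(*) AS report_views
    FROM `example-project.analytics.powerbi_audit_log`
    WHERE DATE(event_ts) >= DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY)
    GROUP BY workspace
    ORDER BY report_views DESC
    LIMIT 10
"""

for row in client.query(sql).result():
    print(f"{row.workspace}: {row.report_views} views")
```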

Posted 1 week ago

Apply

4.0 - 9.0 years

12 - 20 Lacs

Bengaluru

Work from Office

About the Company: Headquartered in California, U.S.A., GSPANN provides consulting and IT services to global clients. We help clients transform how they deliver business value by helping them optimize their IT capabilities, practices, and operations, with our experience in retail, high-technology, and manufacturing. With five global delivery centers and 1900+ employees, we provide the intimacy of a boutique consultancy with the capabilities of a large IT services firm.

Role: Python Lead/Developer
Experience: 4-10 Years
Skill Set: Python, Flask, Pandas, NumPy, Matplotlib, Plotly, SQL
Location: Pune, Hyderabad, Gurgaon

Job Summary: Are you a seasoned software developer with 5 to 10 years of experience and a passion for building scalable backend systems? We're looking for someone just like you!

Location: Bangalore
Qualification: Bachelor's degree in Engineering
Availability: Immediate joiners preferred

Key Responsibilities:
• Design and develop scalable Python applications using FastAPI
• Collaborate with cross-functional teams including front-end, data science, and DevOps
• Work with libraries like Pandas, NumPy, and Scikit-learn for data-driven solutions
• Build and maintain robust backend APIs and database integrations
• Implement unit, integration, and end-to-end testing
• Contribute to architecture and design using best practices

Mandatory Skills:
• Strong Python expertise with data libraries (Pandas, NumPy, Matplotlib, Plotly)
• Experience with FastAPI/Flask, SQL/NoSQL (MongoDB, Postgres, CRDB)
• Middleware orchestration (MuleSoft, BizTalk)
• CI/CD pipelines, RESTful APIs, OOP, and design patterns

Desirable Skills:
• Familiarity with OpenAI tools (GitHub Copilot, ChatGPT API)
• Experience with Azure, Big Data, Kafka/RabbitMQ, Docker/Kubernetes
• Exposure to distributed and high-volume backend systems

If interested, please share your updated resume with the details below at pragati.jha@gspann.com:
LinkedIn Profile Link:
Position Applied For:
Full Name:
Contact Number:
Email ID:
Total Experience:
Relevant Experience:
Current Company:
Current Salary:
Expected Salary:
Notice Period:
Last Working Day (LWD):
Any Offers in Hand?
Current Location:
Preferred Location:
Comfortable with 5 Days a Week?
Are you comfortable coming in for a face-to-face client round after the initial online rounds?
Skills and Rating (Scale of 5):
Data Engineer:
Python:
Flask:
Cloud (which cloud):
SQL:
Any other technologies:
Interview Availability (please confirm a time slot):

Once we have these details, we can move forward with the next steps.
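As a small illustration of the FastAPI plus Pandas work this role describes, a hedged sketch of one aggregation endpoint; the file, column, and route names are hypothetical.

```python
# Minimal FastAPI service serving aggregated stats from a CSV.
# The data file and column names are placeholders.
import pandas as pd
from fastapi import FastAPI, HTTPException

app = FastAPI(title="sales-stats")
df = pd.read_csv("sales.csv")  # loaded once at startup for simplicity

@app.get("/stats/{region}")
def region_stats(region: str) -> dict:
    subset = df[df["region"] == region]
    if subset.empty:
        raise HTTPException(status_code=404, detail="unknown region")
    return {
        "region": region,
        "orders": int(len(subset)),
        "revenue": float(subset["amount"].sum()),
    }

# Run with: uvicorn main:app --reload
```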

Posted 1 week ago

Apply

4.0 - 7.0 years

6 - 10 Lacs

Hyderabad, Gurugram, Ahmedabad

Work from Office

About the Role: Grade Level (for internal use): 10

The Team: Do you love to collaborate and provide solutions? This team comes together across eight different locations every single day to craft enterprise-grade applications that serve a large customer base with growing demand and usage. You will use a wide range of technologies and cultivate a collaborative environment with other internal teams.

The Impact: We focus primarily on developing, enhancing, and delivering required pieces of information and functionality to internal and external clients in all client-facing applications. You will have a highly visible role where even small changes have very wide impact.

What's in it for you?
- Opportunities for innovation and learning new state-of-the-art technologies
- To work in pure agile and scrum methodology

Responsibilities:
- Design and implement software-related projects.
- Perform analyses and articulate solutions.
- Design underlying engineering for use in multiple product offerings supporting a large volume of end-users.
- Develop project plans with task breakdowns and estimates.
- Manage and improve existing solutions.
- Solve a variety of complex problems and figure out possible solutions, weighing the costs and benefits.

What We're Looking For:
Basic Qualifications:
- Bachelor's degree in Computer Science or equivalent
- 7+ years of related experience
- Passionate, smart, and articulate developer
- Strong C#, .NET, and SQL skills
- Experience implementing: Web Services (with WCF, RESTful JSON, SOAP, TCP), Windows Services, and Unit Tests
- Dependency Injection
- Able to demonstrate strong OOP skills
- Able to work well individually and with a team
- Strong problem-solving skills
- Good work ethic, self-starter, and results-oriented
- Agile/Scrum experience a plus
- Exposure to Data Engineering and Big Data technologies like Hadoop, big data processing engines/Scala, NiFi, and ETL is a plus
- Experience with container platforms is a plus
- Experience working in cloud computing environments like AWS, Azure, GCP, etc.

Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments, and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence, pinpointing risks and opening possibilities. We Accelerate Progress.

At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

We take care of you, so you can take care of business. We care about our people. That's why we provide everything you and your career need to thrive at S&P Global.
Health & Wellness: Health care coverage designed for the mind and body.
Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.

Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to . S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, pre-employment training, or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity.

-----------------------------------------------------------
S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment.
-----------------------------------------------------------

Posted 1 week ago

Apply