
4902 Data Processing Jobs - Page 48

JobPe aggregates these listings for easy access, but you apply directly on the original job portal.

4.0 - 9.0 years

20 - 25 Lacs

kolkata, mumbai, new delhi

Work from Office

Design and implement robust, scalable ETL/ELT pipelines using AWS-native tools. Ingest and transform data from multiple sources into S3, applying schema discovery via AWS Glue Crawlers. Develop and orchestrate workflows using Apache Airflow, AWS Step Functions, and Lambda functions. Build and optimize data models in Amazon Redshift for analytics consumption. Manage and enforce IAM-based access control, ensuring secure data practices. Write clean, modular, and reusable code in PySpark and SQL for large-scale data processing. Implement monitoring, alerting, and CI/CD pipelines to improve deployment efficiency and reliability. Work closely with business stakeholders and analysts to understand data requirements and deliver meaningful insights. Participate in code reviews and knowledge-sharing activities across teams. Understand Scrum and be comfortable working in an Agile environment.
Required Skills: 4+ years of experience as a Data Engineer, with at least 3 years working in cloud-native environments (preferably AWS). Hands-on experience with S3, Redshift, Glue (ETL and Crawlers), Lambda, Step Functions, and Airflow. Strong programming skills in PySpark and SQL. Experience designing and implementing data lakes, data warehouses, and real-time/near-real-time pipelines. Familiarity with DevOps, CI/CD pipelines, and infrastructure-as-code tools (e.g., Git, CloudFormation, Terraform). Understanding of data governance, data security, and role-based access control in cloud environments. Strong problem-solving skills and the ability to work independently as well as collaboratively. Excellent written and verbal communication skills.
Nice to Have: Experience working in domains such as nonprofit, healthcare, or campaign marketing. Familiarity with AWS Notebooks, Athena, and CloudWatch. Exposure to data observability tools, testing frameworks, or event-driven architectures. Experience mentoring junior engineers or leading small teams.
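For context on the kind of orchestration this posting describes, here is a minimal, illustrative sketch of an Airflow DAG triggering an AWS Glue job via boto3. The job name, bucket prefix, region, and schedule are placeholder assumptions, not details from the listing (and it assumes Airflow 2.4+):

```python
# Illustrative only: a minimal Airflow DAG that triggers an AWS Glue job via boto3.
# Job name, bucket prefix, region, and schedule are hypothetical placeholders.
from datetime import datetime

import boto3
from airflow import DAG
from airflow.operators.python import PythonOperator


def start_glue_job(**_):
    """Kick off a Glue ETL job and return its run id for downstream monitoring."""
    glue = boto3.client("glue", region_name="ap-south-1")
    run = glue.start_job_run(
        JobName="raw-to-curated-etl",  # hypothetical Glue job name
        Arguments={"--source_prefix": "s3://example-raw-bucket/daily/"},
    )
    return run["JobRunId"]


with DAG(
    dag_id="daily_s3_to_redshift",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="run_glue_etl", python_callable=start_glue_job)
```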

Posted 3 weeks ago

Apply

2.0 - 4.0 years

10 - 13 Lacs

bengaluru

Work from Office

End-to-End Project Management: Oversee timelines, deliverables, and resources across multiple projects. Collaborate across teams and stakeholders to run projects and programs in the DS and School Success function. Leverage and help evolve internal AI and Excel-based tools; identify process inefficiencies and recommend scalable improvements. Create internal tools and perform data processing tasks as requested by the school through Excel tools and functionalities.
Is this someone that looks like you? Experience: 2-4 years of experience working in growth-stage start-ups. Loves solving complex problems. Can work across teams and execute with high ownership. Has good attention to detail. Has good Excel skills and is comfortable with large volumes of data. We deeply value building the right culture at Toddle, and these are a few things that we look for in each hire: coach-ability, curiosity, ownership, hustle and humility.
Excited about the role? Flexibility at work: work from anywhere - home, a co-working space, a café, or even the hills. Blocked no-meeting hours to enable uninterrupted, focused work. Exposure to diverse learning opportunities: work across different projects and teams to develop skills outside of your core expertise. Learning budget: access to a small budget for books, online courses, and subscriptions. Industry-best leave policy: no cap on the number of sick or casual leaves, special paid leaves for childbirth, weddings, and more, and no questions asked on menstrual leave. No bell-curve performance evaluations: we hire the best, and we trust you to deliver your best. A fun and diverse team to collaborate and grow with.

Posted 3 weeks ago

Apply

5.0 - 8.0 years

9 - 13 Lacs

kolkata, mumbai, new delhi

Work from Office

Develop and implement innovative AI models and algorithms tailored to business needs. Collaborate with data scientists, software developers, and product managers to define project objectives and deliver high-quality AI solutions. Optimize machine learning models for performance, scalability, and real-time data processing. Participate in code reviews, architecture discussions, and knowledge-sharing sessions to foster a culture of continuous improvement. Stay updated on the latest advancements in AI technologies and techniques, and evaluate their applicability to ongoing projects. Mentor and guide junior team members in AI development practices and principles.
Requirements: Bachelor's or Master's degree in Computer Science, Artificial Intelligence, or a related field. 5+ years of experience in AI development or a related role. Strong proficiency in programming languages such as Python, Java, or

Posted 3 weeks ago

Apply

2.0 - 7.0 years

1 - 2 Lacs

kolkata

Work from Office

- Manage Excel data & prepare MIS reports
- Support analysis, costing & order tracking
- Ensure accuracy, confidentiality & timely delivery
- Proficient in Advanced Excel (Pivot, VLOOKUP, HLOOKUP, etc.)
- Detail-oriented, proactive & a strong team player

Posted 3 weeks ago

Apply

9.0 - 13.0 years

12 - 17 Lacs

hyderabad

Work from Office

Job Description
Position: Team Lead - Invoice & Data Analysis (US Telecom). Employment Type: Full-Time. Location: Hyderabad (Work From Office).
Role Overview: We are seeking an experienced Team Lead - Invoice & Data Analysis (US Telecom) to oversee invoice reconciliation and data research processes. The role requires strong analytical expertise, proven team leadership, and in-depth knowledge of US telecom invoice processing. You will lead a team of analysts, manage timelines, ensure high-quality deliverables, and drive continuous improvement in data accuracy and process efficiency.
Key Responsibilities: Lead and manage a team of 9-14 data analysts working on invoice reconciliation and data research. Ensure accuracy and efficiency in invoice processing, reconciliation, and analysis. Identify and resolve data discrepancies while maintaining high-quality standards. Collaborate with clients, vendors, and internal stakeholders to resolve issues and align on deliverables. Develop and implement process improvement strategies for invoice reconciliation and data analysis. Leverage tools like Excel, Power BI, and Tableau to analyze data, identify trends, and present findings. Establish and monitor quality control procedures to ensure compliance with client requirements. Prepare reports, dashboards, and presentations highlighting key activities, insights, and recommendations. Train, mentor, and guide team members on data analysis techniques and best practices. Manage multiple priorities in a fast-paced environment while meeting deadlines and SLAs.
Required Skills & Competencies: Strong analytical and problem-solving skills with keen attention to detail. Proficiency in Microsoft Excel (Advanced), Power BI, Tableau, and the MS Office Suite. Experience in US telecom invoice processing and strong secondary research capabilities. Excellent communication skills (written and verbal). Strong interpersonal and leadership abilities with experience managing large teams. Ability to streamline processes, identify opportunities for automation, and enhance efficiency.
Qualifications: Graduate/Postgraduate in Commerce, Business Management, or an equivalent discipline. 9+ years of experience in data research and analysis, including 2-3 years of team/people management.
Industry: Telecom / Data Research / Analytics

Posted 3 weeks ago

Apply

4.0 - 6.0 years

7 - 12 Lacs

gurugram

Work from Office

As a Senior Big Data Platform Engineer at Incedo, you will be responsible for designing and implementing big data platforms to support large-scale data integration projects. You will work with data architects and data engineers to define the platform architecture and build the necessary infrastructure. You will be skilled in big data technologies such as Hadoop, Spark, and Kafka and have experience in cloud computing platforms such as AWS or Azure. You will be responsible for ensuring the performance, scalability, and security of the big data platform and troubleshooting any issues that arise.
Roles & Responsibilities: Designing, developing and maintaining large-scale big data platforms using technologies like Hadoop, Spark and Kafka. Creating and managing data warehouses, data lakes and data marts. Implementing and optimizing ETL processes and data pipelines. Developing and maintaining security and access controls. Troubleshooting and resolving big data platform issues. Collaborating with other teams to ensure the consistency and integrity of data.
Technical Skills Requirements: Experience with big data processing technologies such as Apache Hadoop, Apache Spark, or Apache Kafka. Understanding of distributed computing concepts such as MapReduce, Spark RDDs, or Apache Flink data streams. Familiarity with big data storage solutions such as HDFS, Amazon S3, or Azure Data Lake Storage. Knowledge of big data processing frameworks such as Apache Hive, Apache Pig, or Apache Impala. Must have excellent communication skills and be able to communicate complex technical information to non-technical stakeholders in a clear and concise manner. Must understand the company's long-term vision and align with it. Provide leadership, guidance, and support to team members, ensuring the successful completion of tasks and promoting a positive work environment that fosters collaboration and productivity, taking responsibility for the whole team.
Qualifications: 4-6 years of work experience in a relevant field. B.Tech/B.E/M.Tech or MCA degree from a reputed university; a computer science background is preferred.
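As a rough illustration of the Hadoop/Spark/Kafka stack this role lists, the sketch below reads a Kafka topic with Spark Structured Streaming and lands it in HDFS as Parquet. Broker, topic, and paths are hypothetical, and the job assumes the spark-sql-kafka connector package is available on the cluster:

```python
# Illustrative sketch: consume a Kafka topic with Spark Structured Streaming and
# write it to HDFS as Parquet. Requires the spark-sql-kafka connector package.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker-1:9092")  # hypothetical broker
    .option("subscribe", "clickstream")                   # hypothetical topic
    .load()
    .select(col("key").cast("string"), col("value").cast("string"), "timestamp")
)

query = (
    events.writeStream.format("parquet")
    .option("path", "hdfs:///data/raw/clickstream")
    .option("checkpointLocation", "hdfs:///checkpoints/clickstream")
    .start()
)
query.awaitTermination()
```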

Posted 3 weeks ago

Apply

5.0 - 10.0 years

7 - 11 Lacs

bengaluru

Work from Office

Department - ISS. Reports To - Data Engineers Lead. Level - 4.
About your team: The ISS Data Engineering Chapter is an engineering group comprising three sub-chapters - Data Engineers, Data Platform and Data Visualisation - that supports the ISS Department. Fidelity is embarking on several strategic programmes of work that will create a data platform to support the next evolutionary stage of our investment process. These programmes span asset classes and include Portfolio and Risk Management, Fundamental and Quantitative Research, and Trading.
About your role: This role sits within the ISS Data Engineering team, which is responsible for building and maintaining the data applications and platform that enable the ISS business to operate. The role is appropriate for a senior data engineer capable of taking ownership and delivering a subsection of the wider data platform or data applications.
Strategic Impact: As a Senior Data Engineer, you will directly contribute to our key organizational objectives. Cost efficiency: improve productivity by automating routine maintenance tasks, decrease time to deliver continuous improvement initiatives, and optimize infrastructure costs through efficient database design. Risk mitigation: improve system documentation and knowledge transfer, increase automated testing coverage, and enhance system and location resiliency.
Key Responsibilities: Design, develop, and optimize complex SQL queries, stored procedures, and data models for Oracle-based systems. Create and maintain efficient data pipelines for extract, transform, and load (ETL) processes. Design and implement data quality controls and validation processes to ensure data integrity. Collaborate with cross-functional teams to understand business requirements and translate them into technical specifications. Establish documentation standards and ensure comprehensive system documentation. Lead troubleshooting efforts and resolve complex database performance issues. Design and implement integrations between Oracle systems and cloud services, particularly AWS S3. Lead code reviews and establish best practices for database development. Design and implement data migration strategies from legacy systems to modern cloud-based solutions. Work within an Agile framework, taking a leadership role in sprint planning and technical discussions.
About you - Required Qualifications: 5-10 years of experience with Oracle databases, including advanced SQL and PL/SQL development. Strong knowledge of data modelling principles and database design patterns. Proficiency with Python for data processing and automation. Extensive experience implementing and maintaining data quality controls. Proven ability to reverse engineer existing database schemas and understand complex data relationships. Strong experience with version control systems, preferably Git/GitHub. Excellent written communication skills for technical documentation. Demonstrated ability to work within and help guide Agile development methodologies. Strong knowledge of investment management industry concepts, particularly security reference data, fund reference data, transactions, orders, holdings, and fund accounting.
Additional Qualifications: Knowledge of SQL Server development. Experience with ETL tools like Informatica and Control-M. Unix shell scripting skills for data processing. Experience with CI/CD pipelines for database code. Familiarity with AWS services, particularly S3, Lambda, and Step Functions. Knowledge of database security best practices. Experience with data visualization tools (Tableau, Power BI). Experience mentoring junior developers.
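A hedged sketch of the Oracle-to-AWS-S3 integration pattern the responsibilities mention, using python-oracledb and boto3. The library choice, connection details, query, bucket, and key are all assumptions made for illustration:

```python
# Illustrative only: run an Oracle query with python-oracledb, write the result to
# CSV in memory, and upload it to S3. Connection details, query, bucket, and key
# are hypothetical placeholders.
import csv
import io

import boto3
import oracledb

conn = oracledb.connect(user="ro_user", password="***", dsn="db-host/ORCLPDB1")
with conn.cursor() as cur:
    cur.execute(
        "SELECT security_id, issuer, asset_class FROM ref_security "
        "WHERE updated_dt >= TRUNC(SYSDATE)"
    )
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow([d[0] for d in cur.description])  # header row from cursor metadata
    writer.writerows(cur)                              # cursor iterates row tuples

boto3.client("s3").put_object(
    Bucket="example-ref-data-bucket",
    Key="security/daily_extract.csv",
    Body=buf.getvalue().encode("utf-8"),
)
```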

Posted 3 weeks ago

Apply

2.0 - 7.0 years

4 - 8 Lacs

hyderabad, pune, bengaluru

Work from Office

Your Role: Developing back-end code logic that leverages semantic object linking (ontologies) within Palantir Foundry Pipeline Builder, Code Workbook, and Ontology Manager. Creating servers, databases, and datasets for functionality as needed. Ensuring health of data connections and pipelines (utilizing filesystem, JDBC, SFTP, and webhook). Ensuring conformance with security protocols and markings on sensitive data sets. Ensuring responsiveness of web applications developed on low-code/no-code solutions. Ensuring cross-platform optimization for mobile phones. Seeing projects through from conception to finished product. Meeting both technical and customer needs. Staying abreast of developments in web applications and programming languages. Proficiency with fundamental front-end languages such as HTML, CSS, and JavaScript and with databases such as MySQL, Oracle, and MongoDB preferred. Proficiency with server-side languages for structured data processing; Python, PySpark, Java, Apache Spark, and Spark SQL preferred.
Your Profile: Strong customer orientation, decision making, problem solving, communication and presentation skills. Very good judgement skills and ability to shape compelling solutions and solve unstructured problems with assumptions. Very good collaboration skills and ability to interact with multi-cultural and multi-functional teams spread across geographies. Strong executive presence and spirit. Superb leadership and team building skills with the ability to build consensus and achieve goals through collaboration rather than direct line authority.
What you'll love about working here: You can shape your career with us. We offer a range of career paths and internal opportunities within the Capgemini group. You will also get personalized career guidance from our leaders. You will get comprehensive wellness benefits including health checks, telemedicine, insurance with top-ups, elder care, partner coverage or new parent support via flexible work. You will have the opportunity to learn on one of the industry's largest digital learning platforms, with access to 250,000+ courses and numerous certifications.
About Capgemini: Capgemini is a global business and technology transformation partner, helping organizations accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, generative AI, cloud and data, combined with its deep industry expertise and partner ecosystem.
Location - Bengaluru, Hyderabad, Pune, Mumbai, Chennai

Posted 3 weeks ago

Apply

3.0 - 6.0 years

5 - 9 Lacs

bengaluru

Work from Office

Your Role: We are seeking an experienced Data Engineer with strong expertise in SQL, Spark SQL, Databricks (on Azure or AWS), Unity Catalog, and PySpark to design, build, and optimize modern data solutions. The ideal candidate will also bring in-depth knowledge of data warehousing concepts and best practices to support scalable, high-performance data platforms.
In this role you will bring: 4-10 years of experience in Data Engineering / ETL development. Strong expertise in SQL and Spark SQL (complex queries, optimization, performance tuning). Hands-on experience with Databricks on Azure or AWS (Delta Lake, Lakehouse). Proficiency in PySpark for data processing. Experience with Unity Catalog for data governance, security, and access management. Solid understanding of data warehousing principles, dimensional modeling, and best practices. Knowledge of Azure Data Services (ADLS, ADF, Synapse) or AWS Data Services (S3, Glue, Redshift, Athena, etc.) is a plus.
Your Profile: Strong customer orientation, decision making, problem solving, communication and presentation skills. Very good judgement skills and ability to shape compelling solutions and solve unstructured problems with assumptions. Very good collaboration skills and ability to interact with multi-cultural and multi-functional teams spread across geographies. Strong executive presence and spirit. Superb leadership and team building skills with the ability to build consensus and achieve goals through collaboration rather than direct line authority.
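To ground the Databricks/Delta Lake terminology, here is a minimal, assumption-laden sketch: read raw files, apply a Spark SQL transformation, and write to a Delta table registered in Unity Catalog. The paths and the catalog.schema.table name are hypothetical:

```python
# Illustrative sketch of a Databricks-style transform: raw CSV -> Spark SQL -> Delta table.
# Paths and the Unity Catalog table name are placeholder assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # provided automatically in a Databricks notebook

raw = spark.read.option("header", True).csv("/mnt/landing/orders/")
raw.createOrReplaceTempView("orders_raw")

curated = spark.sql("""
    SELECT order_id,
           CAST(order_ts AS timestamp)     AS order_ts,
           CAST(amount AS decimal(12, 2))  AS amount,
           upper(country_code)             AS country_code
    FROM orders_raw
    WHERE order_id IS NOT NULL
""")

# Append to a Delta table registered in Unity Catalog (catalog.schema.table).
curated.write.format("delta").mode("append").saveAsTable("main.sales.orders_curated")
```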

Posted 3 weeks ago

Apply

3.0 - 8.0 years

4 - 8 Lacs

bengaluru

Work from Office

About The Role
Project Role: Data Engineer. Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must-have skills: PySpark. Good-to-have skills: NA. Minimum 3 year(s) of experience is required. Educational Qualification: 15 years of full-time education.
Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and contribute to the overall data strategy of the organization, ensuring that data solutions are efficient, scalable, and aligned with business objectives.
Roles & Responsibilities: Expected to perform independently and become an SME. Active participation/contribution in team discussions is required. Contribute to providing solutions to work-related problems. Assist in the design and implementation of data architecture and data models. Monitor and optimize data pipelines for performance and reliability.
Professional & Technical Skills: Proficiency in PySpark (must have). Strong understanding of data processing frameworks and ETL tools. Experience with data warehousing concepts and technologies. Familiarity with cloud platforms such as AWS, Azure, or Google Cloud. Knowledge of database management systems and SQL.
Additional Information: The candidate should have a minimum of 3 years of experience in PySpark. This position is based at our Bengaluru office. 15 years of full-time education is required.
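As an illustration of the PySpark pipeline work described, the sketch below extracts from a source location, applies transformations with a simple quality gate, and loads Parquet output. File locations and column names are illustrative assumptions:

```python
# Hedged example of a basic PySpark ETL: extract, transform with a quality gate, load.
# Source/target paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("customer-etl").getOrCreate()

customers = spark.read.parquet("s3a://example-source/customers/")

cleaned = (
    customers
    .dropDuplicates(["customer_id"])
    .withColumn("email", F.lower(F.trim("email")))
    .withColumn("load_date", F.current_date())
)

# Minimal quality gate: fail the run if mandatory keys are missing.
null_keys = cleaned.filter(F.col("customer_id").isNull()).count()
if null_keys:
    raise ValueError(f"{null_keys} rows missing customer_id; aborting load")

cleaned.write.mode("overwrite").partitionBy("load_date").parquet(
    "s3a://example-target/customers/"
)
```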

Posted 3 weeks ago

Apply

4.0 - 9.0 years

20 - 35 Lacs

pune, gurugram, bengaluru

Hybrid

Salary: 20 to 35 LPA. Experience: 3 to 8 years. Location: Pune/Bangalore/Gurgaon (Hybrid). Notice: Immediate joiners only. Key Skills: SQL, Advanced SQL, BI tools, etc.
Roles and Responsibilities: Extract, manipulate, and analyze large datasets from various sources such as Hive, SQL databases, and BI tools. Develop and maintain dashboards using Tableau to provide insights on banking performance, market trends, and customer behavior. Collaborate with cross-functional teams to identify key performance indicators (KPIs) and develop data visualizations to drive business decisions.
Desired Candidate Profile: 3-8 years of experience in Data Analytics or a related field with expertise in Banking Analytics, Business Intelligence, Campaign Analytics, Marketing Analytics, etc. Strong proficiency in tools like Tableau for data visualization; Advanced SQL knowledge preferred. Experience working with big data technologies like the Hadoop ecosystem (Hive) and Spark; familiarity with the Python programming language required.
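For a flavour of the "advanced SQL" analysis this posting refers to, the sketch below runs a window-function query against a Hive table from PySpark to compute month-over-month volume change per branch. The table and column names are assumptions:

```python
# Illustrative only: an advanced SQL pattern (CTE + window function) against a Hive
# table, executed from PySpark. Table and column names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.enableHiveSupport().getOrCreate()

growth = spark.sql("""
    WITH monthly AS (
        SELECT branch_id,
               date_trunc('month', txn_date) AS month,
               SUM(amount)                   AS volume
        FROM bank.transactions
        GROUP BY branch_id, date_trunc('month', txn_date)
    )
    SELECT branch_id,
           month,
           volume,
           volume - LAG(volume) OVER (PARTITION BY branch_id ORDER BY month) AS mom_change
    FROM monthly
""")
growth.show(20, truncate=False)
```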

Posted 3 weeks ago

Apply

13.0 - 17.0 years

32 - 35 Lacs

gurugram

Work from Office

Google Cloud Platform: GCS, Dataproc, BigQuery, Dataflow. Programming Languages: Java, and scripting languages like Python, Shell Script, SQL. 5+ years of experience in IT application delivery with proven experience in agile development methodologies. 1 to 2 years of experience in Google Cloud Platform (GCS, Dataproc, BigQuery, Composer, and data processing with Dataflow).
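A minimal sketch of the GCP stack listed above, using the google-cloud-bigquery client to load files from GCS and query them. Project, dataset, table, and bucket names are hypothetical:

```python
# Illustrative only: load newline-delimited JSON from GCS into BigQuery, then query it.
# Project, dataset, table, and bucket names are placeholder assumptions.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

load_job = client.load_table_from_uri(
    "gs://example-landing/events/*.json",
    "example-project.analytics.events_raw",
    job_config=bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
        autodetect=True,
    ),
)
load_job.result()  # wait for the load to complete

rows = client.query(
    "SELECT event_type, COUNT(*) AS n FROM `analytics.events_raw` GROUP BY event_type"
).result()
for row in rows:
    print(row.event_type, row.n)
```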

Posted 3 weeks ago

Apply

12.0 - 14.0 years

25 - 30 Lacs

chennai

Work from Office

The Solution Architect - Data Engineer will design, implement, and manage data solutions for the insurance business, leveraging expertise in Cognos, DB2, Azure Databricks, ETL processes, and SQL. The role involves working with cross-functional teams to design scalable data architectures and enable advanced analytics and reporting, supporting the company's finance, underwriting, claims, and customer service operations.
Key Responsibilities: Data Architecture & Design: Design and implement robust, scalable data architectures and solutions in the insurance domain using Azure Databricks, DB2, and other data platforms. Data Integration & ETL Processes: Lead the development and optimization of ETL pipelines to extract, transform, and load data from multiple sources, ensuring data integrity and performance. Cognos Reporting: Oversee the design and maintenance of Cognos reporting systems, developing custom reports and dashboards to support business users in finance, claims, underwriting, and operations. Data Engineering: Design, build, and maintain data models, data pipelines, and databases to enable business intelligence and advanced analytics across the organization. Cloud Infrastructure: Develop and manage data solutions on Azure, including Databricks for data processing, ensuring seamless integration with existing systems (e.g., DB2, legacy platforms). SQL Development: Write and optimize complex SQL queries for data extraction, manipulation, and reporting purposes, with a focus on performance and scalability. Data Governance & Quality: Ensure data quality, consistency, and governance across all data solutions, implementing best practices and adhering to industry standards (e.g., GDPR, insurance regulations). Collaboration: Work closely with business stakeholders, data scientists, and analysts to understand business needs and translate them into technical solutions that drive actionable insights. Solution Architecture: Provide architectural leadership in designing data platforms, ensuring that solutions meet business requirements, are cost-effective, and can scale for future growth. Performance Optimization: Continuously monitor and tune the performance of databases, ETL processes, and reporting tools to meet service level agreements (SLAs). Documentation: Create and maintain comprehensive technical documentation including architecture diagrams, ETL process flows, and data dictionaries.
Required Qualifications: Bachelor's or Master's degree in Computer Science, Information Systems, or a related field. Proven experience as a Solution Architect or Data Engineer in the insurance industry, with a strong focus on data solutions. Hands-on experience with Cognos (for reporting and dashboarding) and DB2 (for database management). Proficiency in Azure Databricks for data processing, machine learning, and real-time analytics. Extensive experience in ETL development, data integration, and data transformation processes. Strong knowledge of Python and SQL (advanced query writing, optimization, and troubleshooting). Experience with cloud platforms (Azure preferred) and hybrid data environments (on-premises and cloud). Familiarity with data governance and regulatory requirements in the insurance industry (e.g., Solvency II, IFRS 17). Strong problem-solving skills, with the ability to troubleshoot and resolve complex technical issues related to data architecture and performance. Excellent verbal and written communication skills, with the ability to work effectively with both technical and non-technical stakeholders.
Preferred Qualifications: Experience with other cloud-based data platforms (e.g., Azure Data Lake, Azure Synapse, AWS Redshift). Knowledge of machine learning workflows, leveraging Databricks for model training and deployment. Familiarity with insurance-specific data models and their use in finance, claims, and underwriting operations. Certifications in Azure Databricks, Microsoft Azure, DB2, or related technologies. Knowledge of additional reporting tools (e.g., Power BI, Tableau) is a plus.
Key Competencies: Technical Leadership: Ability to guide and mentor development teams in implementing best practices for data architecture and engineering. Analytical Skills: Strong analytical and problem-solving skills, with a focus on optimizing data systems for performance and scalability. Collaborative Mindset: Ability to work effectively in a cross-functional team, communicating complex technical solutions in simple terms to business stakeholders. Attention to Detail: Meticulous attention to detail, ensuring high-quality data output and system performance.

Posted 3 weeks ago

Apply

3.0 - 8.0 years

4 - 8 Lacs

hyderabad

Work from Office

About The Role
Project Role: Data Engineer. Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must-have skills: PySpark. Good-to-have skills: NA. Minimum 3 year(s) of experience is required. Educational Qualification: 15 years of full-time education.
Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver effective solutions that meet business needs.
Roles & Responsibilities: Expected to perform independently and become an SME. Active participation/contribution in team discussions is required. Contribute to providing solutions to work-related problems. Assist in the optimization of data processing workflows to enhance efficiency. Collaborate with stakeholders to gather requirements and translate them into technical specifications.
Professional & Technical Skills: Proficiency in PySpark (must have). Strong understanding of data modeling and database design principles. Experience with ETL tools and data integration techniques. Familiarity with cloud platforms such as AWS or Azure for data storage and processing. Knowledge of data quality frameworks and best practices.
Additional Information: The candidate should have a minimum of 3 years of experience in PySpark. This position is based at our Hyderabad office. 15 years of full-time education is required.

Posted 3 weeks ago

Apply

2.0 - 4.0 years

3 - 7 Lacs

noida

Work from Office

Position Summary: Perform DataLake (Azure Databricks) operations on healthcare data from multiple sources. To succeed in this role, the candidate should be analytical and an excellent communicator. Experience in the healthcare industry is a plus, as is experience integrating data from disparate sources in MS SQL and DataLake environments. You will be responsible for working with different stakeholders to accomplish business and operational goals.
Key Duties & Responsibilities: Data processing (ETL) using MS SQL, DataLake (Azure Databricks), Python, Scala, and GitHub with T-SQL stored procedures, views, and other database objects; import and export processing; data conversions; business process workflows and metrics reporting. Providing client support services and enhancements. Controlling daily ticket resolution/prioritization as client and user volume increases. Prioritizing issues based on client expectations, volume of current tickets, and visibility of issues across the enterprise. Analyzing the overall enterprise environment to find gaps, and thinking outside the box in order to design and create functionality that will prove to be of value. Providing DataLake (Databricks), Python, SQL, and Scala training to other technicians. Driving ticket resolution momentum and providing feedback to US leadership on where staff improvements can be made in order to better the overall productivity of the technicians. Managing DataLake (Databricks), Python, Scala, and SQL database objects (stored procedures, views, synonyms, tables and overall schema), reporting, and administration.
Skills: 2-4 years of experience writing T-SQL and DataLake (Databricks) code to triage issues, analyse data, and optimize database objects. 1-3 years of experience troubleshooting using T-SQL, DataLake (Databricks), and GitHub. 1-2 years of experience in ETL flat-file/real-time message data loading.
Key Competencies: Takes full responsibility for meeting the client's level of satisfaction. Prioritizes work and sets realistic deadlines to ensure that important tasks are achieved on or ahead of time, with quality results. Shares own expertise with team members, while remaining open to others' ideas. Feels comfortable working in a changing environment. Identifies areas of process improvement and automation. Finds flexible and rapid solutions to meet the client's needs. Takes controlled risks, seeking support from team members when unsure. Helps team members with your expertise to achieve a common goal.
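A hedged illustration of the MS SQL side of this role: pull records with T-SQL via pyodbc, apply a light pandas clean-up, and stage the result for DataLake ingestion. Server, database, table, and output path are placeholder assumptions:

```python
# Illustrative only: query SQL Server with T-SQL via pyodbc, clean in pandas, and
# stage as Parquet for a downstream Databricks job. All names are hypothetical.
import pandas as pd
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=example-sql-host;DATABASE=ClaimsDB;Trusted_Connection=yes;"
)

claims = pd.read_sql(
    "SELECT ClaimID, MemberID, ServiceDate, PaidAmount "
    "FROM dbo.Claims WHERE LoadDate = CAST(GETDATE() AS date)",
    conn,
)

# Basic conversion/cleanup before handing off to the ingestion step.
claims["ServiceDate"] = pd.to_datetime(claims["ServiceDate"])
claims = claims[claims["PaidAmount"] >= 0]

claims.to_parquet("/dbfs/landing/claims_daily.parquet", index=False)
```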

Posted 3 weeks ago

Apply

1.0 - 2.0 years

3 - 5 Lacs

ahmedabad

Work from Office

Google Cloud Platform: GCS, Dataproc, BigQuery, Dataflow. Programming Languages: Java, and scripting languages like Python, Shell Script, SQL. 5+ years of experience in IT application delivery with proven experience in agile development methodologies. 1 to 2 years of experience in Google Cloud Platform (GCS, Dataproc, BigQuery, Composer, and data processing with Dataflow).

Posted 3 weeks ago

Apply

3.0 - 5.0 years

2 - 5 Lacs

pune

Work from Office

About The Role
Skill required: Retirement Solutions - Data Entry Services. Designation: Customer Service Analyst. Qualifications: Any Graduation. Years of Experience: 3 to 5 years.
What would you do? We help insurers redefine their customer experience while accelerating their innovation agenda to drive sustainable growth by transforming to an intelligent operating model. Intelligent Insurance Operations combines our advisory, technology, and operations expertise, global scale, and robust ecosystem with our insurance transformation capabilities. It is structured to address the scope and complexity of the ever-changing insurance environment and offers a flexible operating model that can meet the unique needs of each market segment. A retirement solution is a comprehensive process to understand how much money you will need when you retire; it also helps you identify the best options. Data entry is the process of inputting, updating, or managing information in various digital formats. This includes tasks such as typing data into spreadsheets or databases, verifying accuracy, and ensuring that information is organized and accessible. Data entry services are often outsourced to specialized providers for efficiency and accuracy in handling large volumes of data.
What are we looking for? Experience in retirement plan administration or third-party administration (TPA) environments. Familiarity with transaction types such as rollovers, loan repayments, and plan-to-plan transfers. Knowledge of ERISA regulations and retirement plan compliance standards. Experience working with IT or technical teams to coordinate data processing or system updates. 2+ years of experience in financial operations, preferably within a retirement services or recordkeeping environment. Hands-on experience with the Omni recordkeeping system is required. Strong analytical and problem-solving skills with a high attention to detail. Proficiency in Microsoft Excel, including advanced formulas (e.g., VLOOKUP, INDEX/MATCH, pivot tables, conditional logic). Excellent communication and collaboration skills.
Key duties: Investigate and resolve discrepancies in financial transactions, including contributions, distributions, transfers, and account adjustments. Process corrections and adjustments within the Omni recordkeeping platform, ensuring compliance with internal controls and regulatory standards. Prepare and provide detailed transaction files and specifications to the IT Production Support team for bulk processing. Review and validate the accuracy of bulk transaction results post-processing, identifying and escalating any anomalies. Collaborate with internal teams including Finance, Client Services, and IT to resolve transaction issues and improve data accuracy. Monitor exception reports and transaction logs to proactively identify and address anomalies. Maintain thorough documentation of all correction activities for audit and compliance purposes. Support month-end and year-end reconciliation and reporting processes. Contribute to the development and refinement of standard operating procedures (SOPs) related to financial corrections and Omni usage.
Roles and Responsibilities: In this role you are required to analyze and solve lower-complexity problems. Your day-to-day interaction is with peers within Accenture before updating supervisors. You may have limited exposure to clients and/or Accenture management. You will be given moderate-level instruction on daily work tasks and detailed instructions on new assignments. The decisions you make impact your own work and may impact the work of others. You will be an individual contributor as part of a team, with a focused scope of work. Please note that this role may require you to work in rotational shifts.
Qualification: Any Graduation

Posted 3 weeks ago

Apply

5.0 - 7.0 years

13 - 15 Lacs

chennai

Work from Office

Responsibilities: Design, develop, and maintain robust and scalable backend systems using Django and Python. Develop RESTful APIs using Django REST Framework to power our frontend applications. Implement efficient database solutions using PostgreSQL and Django ORM. Write clean, well-documented, and maintainable code. Collaborate with the frontend team to ensure seamless integration between frontend and backend components. Optimize application performance and scalability. Implement security best practices to protect our applications and user data. Stay up-to-date with the latest technologies and industry trends. Contribute to the development of new features and improvements.
Skills: Django, Django custom UI, Python, REST Framework, ORM, HTML & CSS, ChatGPT prompting, Git knowledge, SQL (Postgres), industry standards and best practices, JSON handling, data processing, working in a team environment. WhatsApp META API experience is a plus.
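A minimal Django REST Framework sketch of the backend pattern described (model, serializer, viewset, and router). Model fields and route names are illustrative assumptions; in a real project the pieces would live in models.py, serializers.py, views.py, and urls.py:

```python
# Illustrative sketch: a model, serializer, and viewset exposing a RESTful endpoint
# backed by PostgreSQL via the Django ORM. Field and route names are hypothetical.
from django.db import models
from rest_framework import serializers, viewsets, routers


class Ticket(models.Model):
    title = models.CharField(max_length=200)
    status = models.CharField(max_length=20, default="open")
    created_at = models.DateTimeField(auto_now_add=True)


class TicketSerializer(serializers.ModelSerializer):
    class Meta:
        model = Ticket
        fields = ["id", "title", "status", "created_at"]


class TicketViewSet(viewsets.ModelViewSet):
    queryset = Ticket.objects.order_by("-created_at")
    serializer_class = TicketSerializer


# urls.py: register the viewset so /tickets/ supports list/create/retrieve/update/delete.
router = routers.DefaultRouter()
router.register(r"tickets", TicketViewSet)
urlpatterns = router.urls
```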

Posted 3 weeks ago

Apply

5.0 - 10.0 years

4 - 6 Lacs

bengaluru

Work from Office

The Solution Architect - Data Engineer will design, implement, and manage data solutions for the insurance business, leveraging expertise in Cognos, DB2, Azure Databricks, ETL processes, and SQL. The role involves working with cross-functional teams to design scalable data architectures and enable advanced analytics and reporting, supporting the company's finance, underwriting, claims, and customer service operations.
Key Responsibilities: Data Architecture & Design: Design and implement robust, scalable data architectures and solutions in the insurance domain using Azure Databricks, DB2, and other data platforms. Data Integration & ETL Processes: Lead the development and optimization of ETL pipelines to extract, transform, and load data from multiple sources, ensuring data integrity and performance. Cognos Reporting: Oversee the design and maintenance of Cognos reporting systems, developing custom reports and dashboards to support business users in finance, claims, underwriting, and operations. Data Engineering: Design, build, and maintain data models, data pipelines, and databases to enable business intelligence and advanced analytics across the organization. Cloud Infrastructure: Develop and manage data solutions on Azure, including Databricks for data processing, ensuring seamless integration with existing systems (e.g., DB2, legacy platforms). SQL Development: Write and optimize complex SQL queries for data extraction, manipulation, and reporting purposes, with a focus on performance and scalability. Data Governance & Quality: Ensure data quality, consistency, and governance across all data solutions, implementing best practices and adhering to industry standards (e.g., GDPR, insurance regulations). Collaboration: Work closely with business stakeholders, data scientists, and analysts to understand business needs and translate them into technical solutions that drive actionable insights. Solution Architecture: Provide architectural leadership in designing data platforms, ensuring that solutions meet business requirements, are cost-effective, and can scale for future growth. Performance Optimization: Continuously monitor and tune the performance of databases, ETL processes, and reporting tools to meet service level agreements (SLAs). Documentation: Create and maintain comprehensive technical documentation including architecture diagrams, ETL process flows, and data dictionaries.
Required Qualifications: Bachelor's or Master's degree in Computer Science, Information Systems, or a related field. Proven experience as a Solution Architect or Data Engineer in the insurance industry, with a strong focus on data solutions. Hands-on experience with Cognos (for reporting and dashboarding) and DB2 (for database management). Proficiency in Azure Databricks for data processing, machine learning, and real-time analytics. Extensive experience in ETL development, data integration, and data transformation processes. Strong knowledge of Python and SQL (advanced query writing, optimization, and troubleshooting). Experience with cloud platforms (Azure preferred) and hybrid data environments (on-premises and cloud). Familiarity with data governance and regulatory requirements in the insurance industry (e.g., Solvency II, IFRS 17). Strong problem-solving skills, with the ability to troubleshoot and resolve complex technical issues related to data architecture and performance. Excellent verbal and written communication skills, with the ability to work effectively with both technical and non-technical stakeholders.
Preferred Qualifications: Experience with other cloud-based data platforms (e.g., Azure Data Lake, Azure Synapse, AWS Redshift). Knowledge of machine learning workflows, leveraging Databricks for model training and deployment. Familiarity with insurance-specific data models and their use in finance, claims, and underwriting operations. Certifications in Azure Databricks, Microsoft Azure, DB2, or related technologies. Knowledge of additional reporting tools (e.g., Power BI, Tableau) is a plus.
Key Competencies: Technical Leadership: Ability to guide and mentor development teams in implementing best practices for data architecture and engineering. Analytical Skills: Strong analytical and problem-solving skills, with a focus on optimizing data systems for performance and scalability. Collaborative Mindset: Ability to work effectively in a cross-functional team, communicating complex technical solutions in simple terms to business stakeholders. Attention to Detail: Meticulous attention to detail, ensuring high-quality data output and system performance.

Posted 3 weeks ago

Apply

2.0 - 7.0 years

6 - 10 Lacs

bengaluru

Work from Office

We are looking for a Data Engineer (AWS, Confluent & SnapLogic).
Data Integration: Integrate data from various Siemens organizations into our data factory, ensuring seamless data flow and real-time data fetching. Data Processing: Implement and manage large-scale data processing solutions using AWS Glue, ensuring efficient and reliable data transformation and loading. Data Storage: Store and manage data in a large-scale data lake, utilizing Iceberg tables in Snowflake for optimized data storage and retrieval. Data Transformation: Apply various data transformations to prepare data for analysis and reporting, ensuring data quality and consistency. Data Products: Create and maintain data products that meet the needs of various stakeholders, providing actionable insights and supporting data-driven decision-making. Workflow Management: Use Apache Airflow to orchestrate and automate data workflows, ensuring timely and accurate data processing. Real-time Data Streaming: Utilize Confluent Kafka for real-time data streaming, ensuring low-latency data integration and processing. ETL Processes: Design and implement ETL processes using SnapLogic, ensuring efficient data extraction, transformation, and loading. Monitoring and Logging: Use Splunk for monitoring and logging data processes, ensuring system reliability and performance.
You'd describe yourself as:
Experience: 3+ relevant years of experience in data engineering, with a focus on AWS Glue, Iceberg tables, Confluent Kafka, SnapLogic, and Airflow. Technical Skills: Proficiency in AWS services, particularly AWS Glue. Experience with Iceberg tables and Snowflake. Knowledge of Confluent Kafka for real-time data streaming. Familiarity with SnapLogic for ETL processes. Experience with Apache Airflow for workflow management. Understanding of Splunk for monitoring and logging. Programming Skills: Proficiency in Python, SQL, and other relevant programming languages. Data Modeling: Experience with data modeling and database design. Problem-Solving: Strong analytical and problem-solving skills, with the ability to troubleshoot and resolve data-related issues.
Preferred Qualities: Attention to Detail: Meticulous attention to detail, ensuring data accuracy and quality. Communication Skills: Excellent communication skills, with the ability to collaborate effectively with cross-functional teams. Adaptability: Ability to adapt to changing technologies and work in a fast-paced environment. Team Player: Strong team player with a collaborative mindset. Continuous Learning: Eagerness to learn and stay updated with the latest trends and technologies in data engineering.
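To illustrate the Confluent Kafka streaming piece of this role, here is a small confluent-kafka producer/consumer sketch. Broker address, topic, and group id are assumptions, and the print call stands in for the real ingestion step (e.g., handing events to a Glue or SnapLogic job):

```python
# Illustrative only: publish and consume JSON events with confluent-kafka.
# Broker, topic, and group id are hypothetical placeholders.
import json

from confluent_kafka import Consumer, Producer

conf = {"bootstrap.servers": "broker-1:9092"}

# Produce one event.
producer = Producer(conf)
producer.produce("plant-telemetry", value=json.dumps({"site": "BLR-1", "temp_c": 41.7}))
producer.flush()

# Consume events and hand them to a downstream loader.
consumer = Consumer({**conf, "group.id": "data-factory-loader", "auto.offset.reset": "earliest"})
consumer.subscribe(["plant-telemetry"])
try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None or msg.error():
            continue
        event = json.loads(msg.value())
        print("received", event)  # placeholder for the real ingestion step
finally:
    consumer.close()
```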

Posted 3 weeks ago

Apply

1.0 - 6.0 years

1 - 2 Lacs

gurugram

Work from Office

Responsibilities: Data entry, record maintenance, and documentation. Prepare reports, invoices, and presentations. Handle emails and online communications. Provide support to different departments with computer-based tasks.
Required Candidate Profile - Skills Required: Good typing speed and accuracy. Proficiency in MS Office and basic software. Organized and detail-oriented.
Contact: hrcps9@gmail.com, 8370014003

Posted 3 weeks ago

Apply

2.0 - 7.0 years

6 - 10 Lacs

bengaluru

Work from Office

We are looking for a Data Engineer (AWS, Confluent & SnapLogic).
Data Integration: Integrate data from various Siemens organizations into our data factory, ensuring seamless data flow and real-time data fetching. Data Processing: Implement and manage large-scale data processing solutions using AWS Glue, ensuring efficient and reliable data transformation and loading. Data Storage: Store and manage data in a large-scale data lake, utilizing Iceberg tables in Snowflake for optimized data storage and retrieval. Data Transformation: Apply various data transformations to prepare data for analysis and reporting, ensuring data quality and consistency. Data Products: Create and maintain data products that meet the needs of various stakeholders, providing actionable insights and supporting data-driven decision-making. Workflow Management: Use Apache Airflow to orchestrate and automate data workflows, ensuring timely and accurate data processing. Real-time Data Streaming: Utilize Confluent Kafka for real-time data streaming, ensuring low-latency data integration and processing. ETL Processes: Design and implement ETL processes using SnapLogic, ensuring efficient data extraction, transformation, and loading. Monitoring and Logging: Use Splunk for monitoring and logging data processes, ensuring system reliability and performance.
You'd describe yourself as:
Experience: 3+ relevant years of experience in data engineering, with a focus on AWS Glue, Iceberg tables, Confluent Kafka, SnapLogic, and Airflow. Technical Skills: Proficiency in AWS services, particularly AWS Glue. Experience with Iceberg tables and Snowflake. Knowledge of Confluent Kafka for real-time data streaming. Familiarity with SnapLogic for ETL processes. Experience with Apache Airflow for workflow management. Understanding of Splunk for monitoring and logging. Programming Skills: Proficiency in Python, SQL, and other relevant programming languages. Data Modeling: Experience with data modeling and database design. Problem-Solving: Strong analytical and problem-solving skills, with the ability to troubleshoot and resolve data-related issues.
Preferred Qualities: Attention to Detail: Meticulous attention to detail, ensuring data accuracy and quality. Communication Skills: Excellent communication skills, with the ability to collaborate effectively with cross-functional teams. Adaptability: Ability to adapt to changing technologies and work in a fast-paced environment. Team Player: Strong team player with a collaborative mindset. Continuous Learning: Eagerness to learn and stay updated with the latest trends and technologies in data engineering.
This role, based in Bangalore, is an individual contributor position. You may be required to visit other locations within India and internationally. In return, you'll have the opportunity to work with teams shaping the future.

Posted 3 weeks ago

Apply

9.0 - 14.0 years

10 - 15 Lacs

pune

Hybrid

We are seeking Backend Engineers to play a pivotal role in building our Data & AI services: an Agentic Workflow Service and a Retrieval-Augmented Generation (RAG) Service. In this hybrid role, you'll leverage your expertise in both backend development and AI to build robust, scalable Data & AI services using Kubernetes on AWS and Amazon Bedrock models.
Key Requirements: Backend development experience with strong Java programming skills along with basic Python programming knowledge. Design and develop microservices with Java Spring Boot that efficiently integrate AI capabilities. Experience with microservices architecture, API design, and asynchronous programming. Experience working with databases (SQL, NoSQL) and data structures. Solid understanding of AWS services, particularly Bedrock, Lambda, and container services. Experience with containerization technologies, Kubernetes, and AWS serverless. Understanding of RAG systems with advanced retrieval mechanisms and vector database integration. Understanding of agentic workflows using technologies such as the AWS Strands framework and LangChain/LangGraph. Create scalable data processing pipelines for training data and document ingestion. Write clean, maintainable, and well-tested code with comprehensive documentation. Collaborate with cross-functional team members including DevOps, product, and frontend engineers. Stay ahead of the latest advancements in data, LLMs, and AI agent architectures.
Also required: 9+ years of total software engineering experience. Understanding of building RAG systems and working with vector databases. ML/AI engineering experience, particularly with LLMs and generative AI applications. Awareness of LangChain/LangGraph or similar LLM orchestration frameworks. Understanding of ML model deployment, serving, and monitoring in production environments. Knowledge of prompt engineering. Excellent problem-solving abilities and system design skills. Strong communication skills and the ability to explain complex technical concepts. Ability to learn new technologies quickly. Must have AWS certifications (Associate Architect / Developer / Data Engineer / AI track). Must have familiarity with streaming architectures and real-time data processing. Must have developed, delivered, and operated microservices on AWS. Understanding of ML/AI ethics and responsible AI development. Knowledge of semantic search and embedding models.
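A toy, framework-free sketch of the retrieval step in a RAG service, to ground the terminology above. The embed() stub stands in for a real embedding model call (for example an Amazon Bedrock embedding model); the documents, scoring scheme, and all names are illustrative assumptions:

```python
# Illustrative only: retrieval for RAG with a stub embedding and cosine similarity.
# Replace embed() with a real embedding model call in practice.
import numpy as np

DOCS = [
    "Agentic workflows coordinate multiple tool-using LLM steps.",
    "RAG augments generation with passages retrieved from a vector store.",
    "Kubernetes schedules the microservices that host these AI services.",
]

def embed(text: str) -> np.ndarray:
    """Stub embedding: hash words into a fixed-size unit vector."""
    vec = np.zeros(64)
    for token in text.lower().split():
        vec[hash(token) % 64] += 1.0
    return vec / (np.linalg.norm(vec) or 1.0)

def retrieve(query: str, k: int = 2) -> list[str]:
    doc_vecs = np.stack([embed(d) for d in DOCS])
    scores = doc_vecs @ embed(query)  # cosine similarity, since vectors are unit-norm
    return [DOCS[i] for i in np.argsort(scores)[::-1][:k]]

context = "\n".join(retrieve("How does retrieval-augmented generation work?"))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: How does RAG work?"
print(prompt)  # this prompt would then be sent to the generator model
```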

Posted 3 weeks ago

Apply

0.0 - 1.0 years

1 - 2 Lacs

navi mumbai

Work from Office

• Entering customer and account data from source documents within time limits • Produce clean data files and ensure all data is accurately updated • Deliver high-quality work while reliably understanding and applying feedback from the Team Leader
Required Candidate Profile: • Bachelor's Degree • A blended dynamic of self-motivation/can-do attitude, initiative-taking, flexibility and strong problem-solving skills • Strong verbal and written communication skills
Perks and benefits: Medical Insurance, Performance Bonus

Posted 3 weeks ago

Apply

3.0 - 8.0 years

12 - 16 Lacs

bengaluru

Work from Office

Job Summary: Synechron is seeking a detail-oriented and analytical Python Developer to join our data team. In this role, you will design, develop, and optimize data pipelines, analysis tools, and workflows that support key business and analytical functions. Your expertise in data manipulation, database management, and scripting will enable the organization to enhance data accuracy, efficiency, and insights. This position offers an opportunity to work closely with data analysts and scientists to build scalable, reliable data solutions that contribute directly to business decision-making and operational excellence.
Software Requirements - Required Skills: Python (version 3.7 or higher) with experience in data processing and scripting. Pandas library (experience in large dataset manipulation and analysis). SQL (proficiency in writing performant queries for data extraction and database management). Data management tools and databases such as MySQL, PostgreSQL, or similar relational databases. Preferred Skills: Experience with cloud data services (AWS RDS, Azure SQL, GCP Cloud SQL). Knowledge of additional Python libraries such as NumPy, Matplotlib, or Jupyter Notebooks for data analysis and visualization. Data pipeline orchestration tools (e.g., Apache Airflow). Version control tools like Git.
Overall Responsibilities: Develop, test, and maintain Python scripts for ETL processes and data workflows. Utilize Pandas to clean, analyze, and transform large datasets efficiently. Write, optimize, and troubleshoot SQL queries for data extraction, updates, and management. Collaborate with data analysts and scientists to create data-driven analytic tools and solutions. Automate repetitive data workflows to increase operational efficiency and reduce errors. Maintain detailed documentation of data processes, pipelines, and procedures. Troubleshoot data discrepancies, pipeline failures, and database-related issues efficiently. Support ongoing data quality initiatives by identifying and resolving data inconsistencies.
Technical Skills (By Category) - Programming Languages: Required: Python (3.7+), proficiency with data manipulation and scripting. Preferred: additional scripting languages such as R or familiarity with other programming environments. Databases/Data Management: Relational databases such as MySQL or PostgreSQL; experience with query optimization and database schema design. Cloud Technologies (preferred): Basic experience with cloud data services (AWS, Azure, GCP) for data storage and processing. Frameworks and Libraries: Pandas, NumPy, Matplotlib, and Jupyter Notebooks for data analysis and visualization; Airflow or similar orchestration tools (preferred). Development Tools and Methodologies: Git or similar version control tools; Agile development practices and collaborative workflows. Security Protocols: Understanding of data privacy, confidentiality, and secure coding practices.
Experience Requirements: 3+ years of experience in Python development with a focus on data processing and management. Proven hands-on experience in building and supporting ETL workflows and data pipelines. Strong experience working with SQL and relational databases. Demonstrated ability to analyze and manipulate large datasets efficiently. Familiarity with cloud data services is advantageous but not mandatory.
Day-to-Day Activities: Write and enhance Python scripts to perform ETL, data transformation, and automation tasks. Design and optimize SQL queries for data extraction and updates. Collaborate with data analysts, scientists, and team members during daily stand-ups and planning sessions. Investigate and resolve data quality issues or pipeline failures promptly. Document data pipelines, workflows, and processes for clarity and future maintenance. Assist in developing analytical tools and dashboards for business insights. Review code changes through peer reviews and ensure adherence to best practices. Participate in continuous improvement initiatives related to data workflows and processing techniques.
Qualifications: Bachelor's degree in Computer Science, Data Science, Information Technology, or a related field. Relevant certifications or training in Python, data engineering, or database management are a plus. Proven track record of working on data pipelines, analysis, and automation projects.
Professional Competencies: Strong analytical and problem-solving skills with attention to detail. Effective communication skills, able to collaborate across teams and explain technical concepts clearly. Ability to work independently and prioritize tasks effectively. Continuous learner, eager to adopt new tools, techniques, and best practices in data processing. Adaptability to changing project requirements and proactive in identifying process improvements. Focused on delivering high-quality work with a results-oriented approach.
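A hedged sketch of the day-to-day work described: pull data with SQL via SQLAlchemy, clean and aggregate it with pandas, and write the result back for analysts. The connection string, schema, table, and column names are placeholder assumptions:

```python
# Illustrative only: a small pandas + SQL workflow. Connection string, schema,
# table, and column names are hypothetical.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql+psycopg2://etl_user:***@db-host:5432/analytics")

orders = pd.read_sql(
    "SELECT order_id, customer_id, order_date, amount "
    "FROM staging.orders WHERE order_date >= DATE '2024-01-01'",
    engine,
)

# Clean and aggregate: one row per customer per month with total spend.
orders["order_date"] = pd.to_datetime(orders["order_date"])
monthly = (
    orders.dropna(subset=["customer_id"])
    .assign(month=lambda df: df["order_date"].dt.to_period("M").dt.to_timestamp())
    .groupby(["customer_id", "month"], as_index=False)["amount"].sum()
)

monthly.to_sql("customer_monthly_spend", engine, schema="marts",
               if_exists="replace", index=False)
```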

Posted 3 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
