
4894 Data Processing Jobs - Page 43

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 - 5.0 years

0 Lacs

India

On-site

DESCRIPTION

Are you interested in building high-performance, globally scalable financial systems that support Amazon's current and future growth? Are you seeking an environment where you can drive innovation, leveraging the scalability of Amazon's AWS cloud services? Do you have a passion for ensuring a positive customer experience? This is the job for you.

Amazon's Finance Technology organization (FinTech) is responsible for building and maintaining the critical finance technology applications that enable new business growth, ensure compliance with financial and tax reporting obligations, and provide deep analysis of Amazon's financial data. This function is of paramount importance to the company, as it underpins Amazon's ability to effectively manage its finances and drive continued expansion. At the heart of FinTech's mission is the General Ledger team, which builds and operates the technologies to account for and post millions of financial transactions daily in support of accurate internal and external financial reporting. This team processes on average 371MM+ transactions per month, servicing the accounting needs of Finance, Accounting, and Tax teams worldwide. The work of the General Ledger team is essential to meeting Amazon's critical close timelines and maintaining the integrity of the company's financial data.

The Amazon Financial Technology team is looking for a results-oriented, driven software development engineer who can help us create the next generation of distributed, scalable financial systems. Our ideal candidate thrives in a fast-paced environment and enjoys the challenge of highly complex business contexts that are typically defined in real time. We need someone to design and develop services that facilitate global financial transactions worth billions of dollars (USD) annually. This is a unique opportunity to be part of a mission-critical initiative with significant organizational visibility and impact.

Design Foundational Greenfield Services: You will collaborate with your team to architect and implement the core services that will form the backbone of this new accounting software. Your technical expertise and innovative thinking will be instrumental in ensuring the foundational services are designed with scalability, reliability, and performance in mind for Amazon.

Adopt the Latest Technology: You will have the chance to work with the latest technologies, frameworks, and tools to build these foundational services, leveraging advancements in areas such as cloud computing, distributed systems, data processing, and real-time analytics.

Solve High-Scale Processing Challenges: This project involves handling millions of transactions per day, presenting the unique challenge of designing and implementing robust, high-performance solutions that handle this volume efficiently. You will be challenged to tackle complex problems related to data processing, queuing, and real-time analytics.

Cross-Functional and Senior Engineer Collaboration: You will work closely with cross-functional teams, including product managers, data engineers, and accountants. You will also work directly with multiple Principal Engineers and present your work to Senior Principal Engineers. This experience will give you the opportunity and visibility to build the leadership skills needed to advance your career.
Key job responsibilities:
- Define high-level and low-level designs for software solutions using the latest AWS technology in a large distributed environment.
- Take the lead on defining and implementing engineering best practices, and use data to define and improve operational best practices.
- Help drive the architecture and technology choices for FinTech accounting products.
- Design, develop, and deploy medium to large software solutions for Amazon accounting needs.
- Raise the bar on code quality, including security, readability, consistency, and maintainability.

About the team: At the heart of FinTech's mission is the General Ledger team, which builds and operates the technologies to account for and post millions of financial transactions daily in support of accurate internal and external financial reporting. This team processes on average 371MM+ transactions per month, servicing the accounting needs of Finance, Accounting, and Tax teams worldwide. The work of the General Ledger team is essential to meeting Amazon's critical close timelines and maintaining the integrity of the company's financial data.

BASIC QUALIFICATIONS:
- 3+ years of non-internship professional software development experience
- 2+ years of non-internship experience in design or architecture (design patterns, reliability, and scaling) of new and existing systems
- Experience programming with at least one software programming language
- Bachelor's degree

PREFERRED QUALIFICATIONS:
- 3+ years of experience with the full software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations
- Bachelor's degree in computer science or equivalent

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
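
Purely as an illustration (this is not Amazon's actual design), one recurring pattern in high-volume ledger pipelines like the one described is idempotent event posting, so that queue redeliveries cannot double-post a transaction. A minimal Python sketch with invented names:

```python
# Illustrative only: idempotent posting of queued ledger events, keyed on a
# transaction id so that redelivered messages cannot double-post.
# All names here are invented for the sketch.
processed_ids: set[str] = set()
ledger: list[dict] = []

def post_transaction(event: dict) -> bool:
    """Post the event exactly once, even if the queue redelivers it."""
    txn_id = event["txn_id"]
    if txn_id in processed_ids:
        return False  # duplicate delivery; already posted
    ledger.append({"txn_id": txn_id, "amount": event["amount"]})
    processed_ids.add(txn_id)
    return True

post_transaction({"txn_id": "t-001", "amount": 125.00})
post_transaction({"txn_id": "t-001", "amount": 125.00})  # ignored duplicate
assert len(ledger) == 1
```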

Posted 3 weeks ago

Apply

5.0 - 10.0 years

15 - 20 Lacs

Chennai

Work from Office

Google Cloud Platform: GCS, Dataproc, BigQuery, Dataflow. Programming languages: Java, plus scripting languages such as Python, shell script, and SQL. 5+ years of experience in IT application delivery, with proven experience in agile development methodologies. 1 to 2 years of experience with Google Cloud Platform (GCS, Dataproc, BigQuery, Composer, and data processing with Dataflow). Mandatory key skills: Google Cloud Platform, GCS, BigQuery, Dataflow, Java.
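
For illustration, a minimal sketch of querying BigQuery from Python with the official client library; the project, dataset, and table names are placeholders, not from the posting:

```python
# Hypothetical sketch: running a BigQuery aggregation from Python.
from google.cloud import bigquery

def daily_event_counts(project_id: str) -> list[tuple]:
    client = bigquery.Client(project=project_id)
    sql = """
        SELECT DATE(event_ts) AS event_date, COUNT(*) AS events
        FROM `my_dataset.events`   -- placeholder dataset and table
        GROUP BY event_date
        ORDER BY event_date
    """
    rows = client.query(sql).result()  # blocks until the query job finishes
    return [(row.event_date, row.events) for row in rows]
```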

Posted 3 weeks ago

Apply

1.0 - 2.0 years

7 - 17 Lacs

Hyderabad

Work from Office

About this role: Wells Fargo is seeking an Associate Operations Processor. In this role, you will: Perform general clerical operations tasks that are routine in nature. Receive, log, batch, and distribute work. File, photocopy, and answer phones. Prepare and distribute incoming and outgoing mail. Regularly receive direction from your supervisor and escalate questions and issues to more experienced roles. Work under close supervision following established procedures.

Required Qualifications: 6+ months of operations support experience, or equivalent demonstrated through one or a combination of the following: work experience, training, military experience, education.

Desired Qualifications: The Virtual Keying process entails manually capturing checks that fail OCR capture. The process includes capturing the amount and the MICR line visible on the check image in the system to enable timely credit to client accounts. The candidate must be trained in high-speed number keying (10-key). Able to multi-task to accomplish tasks effectively. Attention to detail. Ability to work quickly and accurately while maintaining acceptable standards of workmanship. Quick learner with the ability to retain a high volume of information. Ability to recognize and escalate any discrepancies identified while processing. Review the existing process in detail to identify inherent risks, and work with the manager and key stakeholders to incorporate controls (both manual and systematic) to enhance the overall effectiveness of the process. Any graduate; freshers or candidates with 1-2 years of experience in data entry, typing, or data processing jobs.

Job Expectations: Work shifts are 8:30 pm to 5:30 am (night shift) and 4:30 am to 1:30 pm (early morning shift). Shifts will be rotational and may include working Sundays. The process operates on Indian holidays and is aligned to US holidays.

Posted 3 weeks ago

Apply

2.0 - 4.0 years

7 - 17 Lacs

Hyderabad

Work from Office

About this role: Wells Fargo is seeking an Operations Processor. In this role, you will: Perform moderately complex operations duties in support of either a service center or department environment. Require considerable knowledge of company personnel policies and practices. Collect data and prepare related operational reports. Prepare input forms for an automated data processing system. Utilize the company's internal operations to perform duties. Coordinate projects. Furnish information to authorized persons. Provide guidance to all levels of employees regarding personnel policies and procedures requiring some policy interpretation.

Required Qualifications: 2+ years of operations support experience, or equivalent demonstrated through one or a combination of the following: work experience, training, military experience, education.

Desired Qualifications: The candidate should be flexible to work in evening and night shifts, which begin after 5 PM IST.

Posted 3 weeks ago

Apply

2.0 - 5.0 years

10 - 14 Lacs

Hyderabad

Work from Office

Research Associate - I

ICRISAT seeks applications from dynamic and motivated Indian nationals for the position of Research Associate I under the category of National Project Workforce (NPW). The incumbent will support the planning and implementation of both field-based and controlled-environment experiments focused on soil-borne diseases affecting legume crops. The position is based in Patancheru, Hyderabad, Telangana.

ICRISAT is a non-profit, non-political organization that conducts agricultural research for development in Asia and sub-Saharan Africa with a wide array of partners throughout the world. Covering 6.5 million square kilometers of land in 55 countries, the semi-arid or dryland tropics are home to over 2 billion people, 644 million of whom are the poorest of the poor. ICRISAT and its partners help empower these disadvantaged populations to overcome poverty, hunger, and a degraded environment through better agricultural production systems. ICRISAT is headquartered at Patancheru near Hyderabad, India, with two regional hubs and eight country offices in sub-Saharan Africa. ICRISAT envisions a prosperous, food-secure, and resilient dryland tropics. Its mission is to reduce poverty, hunger, malnutrition, and environmental degradation in the dryland tropics. ICRISAT conducts research on its mandate crops of chickpea, pigeonpea, groundnut, sorghum, pearl millet, and finger millet in the arid and semi-arid tropics. The Institute focuses its work on the drylands and on protecting the environment. Tropical dryland areas are usually seen as resource-poor and perennially beset by shocks such as drought, trapping dryland communities in poverty and hunger and making them dependent on external aid. Please visit www.icrisat.org.

Responsibilities: Assist in the design of experiments and surveys and in the development of data collection tools, such as questionnaires and interview guides, related to crop production and protection. Analyze the impacts of climate change on crop pest and disease dynamics and assess implications for agroecological services using historical datasets and climate projections. Manage large-scale data integration workflows involving AI, image processing, and field observations to support research and decision-making for crop stress detection and management. Calibrate and validate crop and pest models using multi-source data to contribute to scenario-based early warning systems and policy advisory tools. Contribute to research on climate change impacts on agriculture, adaptation planning, crop loss assessment due to pests, and the advancement of citizen science approaches. Integrate diverse datasets, including microbial, phenotypic, agronomic, and genomic data, to identify functional relationships and potential biomarkers. Prepare and disseminate scientific reports and peer-reviewed publications to communicate findings to scientific and policy audiences. Assist in research proposal development, project management, and stakeholder communication to ensure effective delivery and impact of ongoing initiatives. Perform additional research duties as assigned by the supervisor.

Essential Criteria: Master's or Ph.D. in Crop Protection, Agricultural Engineering, Environmental Sciences, Computer Science, or a closely related field. Strong experience in data analysis, pest and disease modelling, and AI and digital tools. Proficiency in Python, R, or similar tools for data processing and visualization. Experience in citizen science, farmer engagement, or participatory digital tools. Background in climate resilience, contributing to the science-policy interface, or international agricultural development projects. Experience handling omics and genomic datasets; an understanding of bioinformatics or phenotyping is an advantage. Strong communication skills and experience working in interdisciplinary, multi-stakeholder environments. NET qualification is preferred.

Posted 3 weeks ago

Apply

3.0 - 7.0 years

8 - 9 Lacs

Kolkata, Mumbai, New Delhi

Work from Office

Position: Data Engineer - Databricks

Purpose of the Position: Develop, support, and steer end-to-end business intelligence using Databricks.

Location: Nagpur / Pune / Chennai / Bangalore

Key Responsibilities: Work with business analysts and data architects to translate requirements into technical implementations. Design, develop, implement, and maintain PySpark code through the Databricks UI that enables data and analytics use cases for the client. Code, test, and document new or modified data systems to create robust and scalable applications for data analytics. Dive deep into performance, scalability, capacity, and reliability problems to resolve issues. Take on research projects and POCs to improve data processing.

Work and Technical Experience: Must-Have Skills: 3+ years of hands-on experience with Databricks and PySpark. Proficiency in SQL and data manipulation skills. Good understanding of data warehousing concepts and technologies. Good-to-Have Skills: Understanding of Google Pub/Sub, Kafka, or MongoDB. Familiarity with ETL processes and tools for data extraction, transformation, and loading. Knowledge of cloud platforms such as Databricks, Snowflake, and Google Cloud. Familiarity with data governance and data quality best practices.

Qualifications: Bachelor's degree in computer science, engineering, or a related field. Demonstrated continued learning through one or more technical certifications or related methods. 3+ years of relevant experience in Data Engineering.

Qualities: Self-motivated and focused on delivering outcomes for a fast-growing team and firm. Able to communicate persuasively through speaking, writing, and client presentations.
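
A minimal PySpark sketch of the kind of Databricks transformation this role describes; the input path, columns, and business rule are hypothetical:

```python
# Hypothetical batch transformation: completed orders -> daily revenue.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily_revenue").getOrCreate()

orders = spark.read.parquet("/mnt/raw/orders")  # placeholder path

daily_revenue = (
    orders
    .filter(F.col("status") == "COMPLETED")
    .groupBy(F.to_date("order_ts").alias("order_date"))
    .agg(F.sum("amount").alias("revenue"))
)

daily_revenue.write.mode("overwrite").parquet("/mnt/curated/daily_revenue")
```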

Posted 3 weeks ago

Apply

2.0 - 4.0 years

1 - 1 Lacs

Tarapur, Boisar

Work from Office

1) Responsible for data entry, handling petty cash, and other accounting activities. 2) Preparing MIS reports and analyzing data. 3) Inventory management.

Posted 3 weeks ago

Apply

2.0 - 7.0 years

7 - 17 Lacs

Bengaluru

Work from Office

About this role: Wells Fargo is seeking a Data Product Integrity Specialist. In this role, you will: Act as a data steward; participate in low to moderate complexity data integrity and quality initiatives and reviews of business systems; identify opportunities for process improvement with business systems. Review and analyze basic data quality audits with low to moderate risk; research and resolve data quality issues related to systems, data sets, and data connections. Present recommendations for resolving low to moderate complexity data quality issues and exercise independent judgment while developing expertise in the data quality standards and control framework; escalate and remediate low to moderate complexity data quality issues. Collaborate and consult with business, risk partners, and technical staff, including Enterprise Data Management and Technology functions, to triage and remediate data quality issues. Ensure compliance with Enterprise Data Management policies, standards, and technology, and facilitate execution of business data standards and control frameworks. Understand the complete data lifecycle, from data creation and capture to data processing, storage, and usage. Identify and track end-to-end data lineage using traditional methods and systemic tools. Actively engage in identifying solutions to improve data and implement data quality improvement plans.

Required Qualifications: 2+ years of data quality or data management experience, or equivalent demonstrated through one or a combination of the following: work experience, training, military experience, education.

Desired Qualifications: Data transformation experience (for example, as a business analyst). Finance or capital markets product knowledge. Product owner / backup product owner exposure (Agile Scrum). Very good spontaneous communication skills. Exposure to working with senior leadership; critical thinking and problem-solving abilities. US banking regulatory reporting exposure is also beneficial (for example: FR Y, FR 2052a, FFIEC 009). SQL working experience.
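
As a hedged illustration of the data-quality audits mentioned above, a small pandas sketch that checks duplicate keys and null counts; the table and column names are invented:

```python
# Basic data-quality audit: row count, duplicate keys, nulls in required columns.
import pandas as pd

def audit(df: pd.DataFrame, key: str, required: list[str]) -> dict:
    return {
        "row_count": len(df),
        "duplicate_keys": int(df[key].duplicated().sum()),
        "null_counts": {c: int(df[c].isna().sum()) for c in required},
    }

accounts = pd.DataFrame(
    {"account_id": [1, 2, 2, 4], "balance": [100.0, None, 50.0, 75.0]}
)
print(audit(accounts, key="account_id", required=["account_id", "balance"]))
```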

Posted 3 weeks ago

Apply

0.0 - 4.0 years

2 - 6 Lacs

Gurugram

Work from Office

How will you make an impact in this role? With a focus on digitization, innovation, and analytics, the Enterprise Digital teams create central, scalable platforms and customer experiences to help markets across all of these priorities. Their charter is to drive scale for the business and accelerate innovation, both for immediate impact and for long-term transformation of our business. A unique aspect of the Enterprise Digital teams is the integration of diverse skills across their remit. The teams have a very broad range of responsibilities, resulting in a broad range of initiatives around the world.

The American Express Enterprise Digital Center of Excellence (ED COE) leads the Enterprise Product Analytics and Experimentation charter for Brand Performance Marketing and Digital Acquisition Membership experiences, as well as Enterprise Platforms. The focus of this collaborative team is to drive growth by enabling efficiencies in paid performance channels and evolving our digital experiences with actionable insights and analytics. The team specializes in using data around digital product usage to drive improvements in the acquisition customer experience and deliver higher satisfaction and business value.

About this Role: This role will report to the Manager of the International Acquisition Experience Analytics team within the Enterprise Digital COE (ED COE) and will be based in Gurgaon. The candidate will be responsible for delivering highly impactful analytics to optimize our digital acquisition experiences across international markets (Shop, Apply, GO2, etc.): Deliver strategic analytics focused on digital acquisition experiences across international markets, aimed at optimizing our customer experiences. Define and build key KPIs to monitor acquisition journey performance and success. Support the development of new products and capabilities. Deliver read-outs of experiments, uncovering insights and learnings that can be used to further optimize the customer journey. Gain a deep functional understanding of the enterprise-wide product capabilities and associated platforms over time, and ensure analytical insights are relevant and actionable. Power in-depth strategic analysis and provide analytical and decision support by mining digital activity data along with AXP closed-loop data.

Minimum Qualifications: Advanced degree in a quantitative field (e.g., Finance, Engineering, Mathematics, Computer Science). Strong programming skills are preferred. Some experience with Big Data programming languages (Hive, Spark), Python, and SQL. Experience in large-scale data processing and handling; an understanding of data science is a plus. Ability to work in a dynamic, cross-functional environment, with strong attention to detail. Excellent communication skills, with the ability to engage, influence, and encourage partners to drive collaboration and alignment.

Preferred Qualifications: Strong analytical and conceptual thinking competence to solve unstructured and complex business problems and articulate key findings to senior leaders and partners in a succinct and concise manner. Basic knowledge of statistical techniques for experimentation: hypothesis testing, regression, t-test, chi-square test.
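
A brief illustration of the statistical tests the posting names (t-test, chi-square); the sample data below is fabricated purely for the sketch:

```python
# Toy experiment read-out: Welch t-test on a continuous metric and a
# chi-square test on converted / not-converted counts per arm.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
control = rng.normal(loc=0.12, scale=0.05, size=500)  # e.g., conversion proxy
variant = rng.normal(loc=0.13, scale=0.05, size=500)

t_stat, p_value = stats.ttest_ind(variant, control, equal_var=False)
print(f"Welch t-test: t={t_stat:.3f}, p={p_value:.4f}")

observed = np.array([[60, 440], [75, 425]])  # fabricated 2x2 counts
chi2, p, dof, _ = stats.chi2_contingency(observed)
print(f"Chi-square: chi2={chi2:.3f}, p={p:.4f}")
```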

Posted 3 weeks ago

Apply

1.0 - 3.0 years

3 - 5 Lacs

Gurugram

Work from Office

Execute data processing, cleansing, transformation, and analysis of procurement datasets across hotel and corporate functions using GCP, SQL, Tableau Prep, Excel, etc. Develop and maintain Tableau dashboards and visualizations to support reporting needs from Procurement Excellence, Corporate Procurement, and Regional Hotel teams. Support month-end, in-month, and ad hoc reporting cycles in line with defined SLAs and agreed standards. Collaborate with senior stakeholders to interpret reporting requirements and deliver insight-led outputs aligned with business goals. Maintain documentation and standard operating procedures (SOPs) for repeatable analytics and reporting activities. Identify data anomalies, proactively flag risks, and recommend actions to improve data quality and operational reporting. Partner with the GP Ops Senior Manager and the existing Digital Reporting & Analytics team to triage intake requests, align priorities, and ensure quality assurance of deliverables. Contribute to knowledge-sharing, continuous improvement, and the expansion of a self-service reporting culture.

Education: Bachelor's degree in Computer Science, Information Technology, Data Science, Analytics, or a related field, or equivalent experience.

Experience & Expertise: 1-3 years in a reporting or analytics role, ideally within procurement or finance functions. Experience with large datasets, Excel (advanced), SQL, and Tableau. Familiarity with cloud data platforms such as Google Cloud Platform (BigQuery, Cloud Storage). Experience working in a virtual or global team environment. Preferred: Smartsheet, PowerPoint, ticketing/intake systems.

Technical Knowledge: Data transformation and cleaning techniques. Business-friendly data visualization principles. Understanding of procurement data domains (spend, supplier, PO/invoice, category taxonomy).

Posted 3 weeks ago

Apply

1.0 - 4.0 years

3 - 6 Lacs

Bengaluru

Work from Office

POS-P322. We're seeking a Senior Software Engineer to join a high-impact, new initiative within the Breeze Assistant team at HubSpot. Breeze Assistant is a fully conversational AI chief of staff designed specifically for Go-to-Market (GTM) teams. By integrating frontier AI capabilities with rich data from HubSpot and third-party applications, Breeze provides an AI assistant that fundamentally transforms team collaboration and productivity.

As a Senior Engineer on the Breeze Assistant team, you'll do many things, and you likely already have experience doing some of them. We are looking for people who: Have a proven track record of delivering high-value projects, including collaborating across multiple teams when needed. Senior Engineers at HubSpot are experienced individual contributors who help drive our product vision forward by collaborating closely with team members and contributing hands-on code. Bring solid backend engineering experience. A strong understanding of technologies like Java, Node.js, MySQL, Kafka, and cloud-native architectures is beneficial. We are looking for candidates who are well-versed in engineering principles and excited to solve challenging problems, rather than deep specialists in any single technology. Have direct experience building new user-facing products from the ground up, ideally including product integrations and complex, data-driven applications. Familiarity or hands-on experience with Generative AI technologies (LLMs, Agents, RAG, etc.) is a strong plus. A demonstrated interest in working on AI-driven products is highly desirable. Demonstrate strong problem-solving and pragmatic engineering skills, balancing immediate needs with longer-term technical architecture. Thrive in dynamic, fast-paced environments and are comfortable adapting to evolving requirements. Understand secure, compliant application development best practices, particularly for data privacy, governance, and API integrations. Make significant contributions to technical decisions and help solve challenges related to context, user interfaces, real-time features, data governance, and application performance. Support and help mentor team members in your area of expertise when opportunities arise. Help build and scale the engineering culture at HubSpot India.

Join us in creating the AI assistant that defines the future of work for GTM teams. We know the confidence gap and impostor syndrome can get in the way of meeting spectacular candidates, so please don't hesitate to apply; we'd love to hear from you. If you need accommodations or assistance due to a disability, please reach out to us using this form.

At HubSpot, we value both flexibility and connection. Whether you're a Remote employee or work from the Office, we want you to start your journey here by building strong connections with your team and peers. If you are joining our Engineering team, you will be required to attend a regional HubSpot office for in-person onboarding. If you join our broader Product team, you'll also attend other in-person events such as your Product Group Summit and other gatherings to continue building on those connections. If you require an accommodation due to travel limitations or other reasons, please inform your recruiter during the hiring process. We are committed to supporting candidates who may need alternative arrangements.

Massachusetts Applicants: It is unlawful in Massachusetts to require or administer a lie detector test as a condition of employment or continued employment. An employer who violates this law shall be subject to criminal penalties and civil liability. Germany Applicants: (m/f/d) - link to HubSpot's Career Diversity page here. India Applicants: link to HubSpot India's equal opportunity policy here.

About HubSpot: HubSpot (NYSE: HUBS) is an AI-powered customer platform with all the software, integrations, and resources customers need to connect marketing, sales, and service. HubSpot's connected platform enables businesses to grow faster by focusing on what matters most: customers. At HubSpot, bold is our baseline. Our employees around the globe move fast, stay customer-obsessed, and win together. Our culture is grounded in four commitments: Solve for the Customer; Be Bold; Learn Fast, Align, Adapt & Go!; and Deliver with HEART. These commitments shape how we work, lead, and grow. We're building a company where people can do their best work. We focus on brilliant work, not badge swipes. By combining clarity, ownership, and trust, we create space for big thinking and meaningful progress. And we know that when our employees grow, our customers do too. Recognized globally for our award-winning culture by Comparably, Glassdoor, Fortune, and more, HubSpot is headquartered in Cambridge, MA, with employees and offices around the world. Explore more: HubSpot Careers | Life at HubSpot on Instagram.

By submitting your application, you agree that HubSpot may collect your personal data for recruiting, global organization planning, and related purposes. Refer to HubSpot's Recruiting Privacy Notice for details on data processing and your rights.

Posted 3 weeks ago

Apply

2.0 - 5.0 years

4 - 7 Lacs

Chennai

Work from Office

ZoomInfo is where careers accelerate. We move fast, think boldly, and empower you to do the best work of your life. You'll be surrounded by teammates who care deeply, challenge each other, and celebrate wins. With tools that amplify your impact and a culture that backs your ambition, you won't just contribute. You'll make things happen fast.

ZoomInfo is looking for a Data Analyst II to join our Data Operations and Analysis team. This position supports our broader Data team and works closely with Product, Engineering, and Research. Key aspects of this role include supporting strategic data infrastructure decisions, discovering and driving improvements in our data processing, and providing timely and thorough analysis. You will have the opportunity to influence the future success of ZoomInfo's data assets.

What you'll do: Develop a deep and comprehensive understanding of the data and infrastructure we work with. Summarize and document different aspects of the current state, probe for opportunities, and chart the path forward with solutions to improve the accuracy and volume of the data we serve to our customers. Collaborate with our Data Engineering, Product Management, and Research counterparts to drive forward the improvement opportunities you've identified.

Inquisitive - You are curious about ZoomInfo's product, market, and data operations, and eager to develop a thorough understanding of our data and infrastructure. You pursue technical training and development opportunities, and strive to continuously build knowledge and skills. A Problem Solver - You have strong problem-solving and troubleshooting skills, with the ability to exercise good judgment in ambiguous situations. A Team Player - You are willing to tackle new challenges and enjoy facilitating and owning cross-functional collaboration. Within the team, you seek opportunities to mentor and coach junior analysts. Self-Directed - You enjoy working independently. There will be guidance and resources when you need them, but we're looking for someone who will thrive with the freedom to explore our data, propose solutions, and drive execution and results.

What you'll bring: 2-5 years of experience in analytics, quantitative research, and/or data operations. Knowledge of databases, ETL development, and the challenges posed by data quality. Ability to summarize complex analyses in a simple, intuitive format, and to present findings in a clear and concise manner to both technical and non-technical stakeholders. Strong understanding of AI technologies, including machine learning, natural language processing, and data analytics. Detail-oriented, with strong organizational and analytical skills. Strong initiative and the ability to manage multiple projects simultaneously. Exposure to, and a technical understanding of, working with data at scale. Experience in a product- or project-driven work environment preferred. Advanced-level SQL, Python, BigQuery, and Excel skills handling large-scale, complex datasets. Experience building and deploying AI/ML solutions. Familiarity with data visualization tools, e.g., Tableau / Looker Studio. Experience with Databricks, AWS, or Airflow is preferred but not mandatory.

Posted 3 weeks ago

Apply

2.0 - 5.0 years

4 - 7 Lacs

Mumbai

Work from Office

Key Responsibilities: Understand business requirements by engaging with business teams. Extract data from valuable data sources and automate the data collection process. Process and clean data, and validate the integrity of data to be used for analysis. Perform exploratory data analysis to identify trends and patterns in large amounts of data. Build machine learning models using algorithms and statistical techniques such as regression, decision trees, boosting, etc. Present insights using data visualization techniques. Propose solutions and strategies for various complex business challenges. Build GenAI models using RAG frameworks for chatbots, summarisation, etc. Develop model deployment pipelines using Lambda, ECS, etc.

Skills & Attributes: Knowledge of statistical programming languages such as R and Python, database query languages such as SQL, and statistical tests such as distributions, regression, etc. Experience with data visualization tools such as Tableau, Qlik Sense, etc. Ability to write comprehensive reports, with an analytical mind and an inclination for problem-solving. Exposure to advanced techniques such as GenAI, neural networks, NLP, and image and speech processing. Ability to engage with stakeholders to understand business requirements and convert them into technical problems for solution development and deployment.
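
A minimal scikit-learn sketch of the boosting-based modeling step listed above, using a synthetic dataset purely for illustration:

```python
# Toy boosted-trees classifier: synthetic data, train/test split, AUC read-out.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05)
model.fit(X_train, y_train)

auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Hold-out AUC: {auc:.3f}")
```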

Posted 3 weeks ago

Apply

2.0 - 5.0 years

4 - 7 Lacs

Hyderabad

Work from Office

We are looking for an experienced Software Engineer with experience in AWS cloud services and distributed systems. You should have a foundation in software development and experience using cloud and Big Data platforms to build scalable and efficient solutions. You will report to a Senior Manager, and you will work from the office two days a week in Hyderabad.

Responsibilities: Collaborate with teams to understand requirements and design software solutions that use AWS cloud services and Big Data technologies. Develop and maintain scalable and reliable software applications using programming languages such as Java, Python, or Scala. Use AWS services including EC2, S3, Lambda, Athena, RDS, DynamoDB, EMR, and others to build cloud-native applications and microservices. Design and develop data processing pipelines using Big Data frameworks such as Hadoop, Spark, Kafka, Presto, and Hive. Implement monitoring, logging, and alerting solutions to ensure system reliability, availability, and performance. Collaborate with DevOps teams to automate deployment processes and ensure smooth integration of software components.

About Experian - Experience and Skills: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. Experience as a Software Engineer, with at least 2-5 years of experience developing software applications. Proficiency in programming languages such as Java, Python, or Scala, and familiarity with software development methodologies and best practices. Hands-on experience building Big Data applications with AWS cloud services and infrastructure, including compute, storage, networking, and security. Develop custom operators and sensors to extend Airflow functionality and support specific use cases (a hedged sketch follows below). Develop batch applications using Spark with Scala or PySpark.
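
Since the posting mentions extending Airflow with custom operators, here is a hedged sketch of what such an operator can look like; the operator name and S3 logic are hypothetical, not from the posting:

```python
# Hypothetical custom Airflow operator that counts rows in an S3-hosted CSV.
from airflow.models import BaseOperator

class S3RowCountOperator(BaseOperator):
    """Illustrative operator: fetch an S3 object and count its lines."""

    def __init__(self, bucket: str, key: str, **kwargs):
        super().__init__(**kwargs)
        self.bucket = bucket
        self.key = key

    def execute(self, context):
        import boto3  # deferred import keeps DAG parsing lightweight

        body = boto3.client("s3").get_object(
            Bucket=self.bucket, Key=self.key
        )["Body"].read()
        count = body.count(b"\n")
        self.log.info("Counted %d rows in s3://%s/%s",
                      count, self.bucket, self.key)
        return count  # pushed to XCom for downstream tasks
```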

Posted 3 weeks ago

Apply

2.0 - 5.0 years

4 - 7 Lacs

Kolkata

Work from Office

Not Applicable. Specialism: Data, Analytics & AI. Management Level: Senior Associate.

Summary: In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Responsibilities: Design and build data pipelines and data lakes to automate ingestion of structured and unstructured data, providing fast, optimized, and robust end-to-end solutions. Knowledge of data lake and data warehouse concepts. Experience working with AWS big data technologies. Improve the data quality and reliability of data pipelines through monitoring, validation, and failure detection. Deploy and configure components to production environments.

Technology: Redshift, S3, AWS Glue, Lambda, SQL, PySpark.

Mandatory skill sets: AWS Data Engineer. Preferred skill sets: AWS Data Engineer. Years of experience required: 4-8. Education qualification: B.Tech/MBA/MCA. Degrees/Field of Study required: Bachelor of Engineering, Bachelor of Technology.

Required Skills: Data Engineering, Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline {+ 27 more}
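
A skeleton of an AWS Glue PySpark job of the type described (catalog table in, Parquet on S3 out); the database, table, and bucket names are placeholders:

```python
# Hypothetical Glue ETL job: read a catalog table, cast a field, write Parquet.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

dyf = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="transactions"  # placeholder catalog names
)
cleaned = dyf.resolveChoice(specs=[("amount", "cast:double")])

glue_context.write_dynamic_frame.from_options(
    frame=cleaned,
    connection_type="s3",
    connection_options={"path": "s3://my-bucket/curated/transactions/"},
    format="parquet",
)
job.commit()
```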

Posted 3 weeks ago

Apply

2.0 - 7.0 years

4 - 9 Lacs

Bengaluru

Work from Office

The Data Coordinator focuses on initiating, investigating, and resolving queries based on discrepancies identified by the database, in accordance with Clario's SOPs and SWIs. They are also responsible for maintaining organized study documentation throughout the lifecycle of a study.

ESSENTIAL DUTIES AND RESPONSIBILITIES: Support department workflow by taking action on discrepant data through investigation and issuance of queries to sites/sponsors to verify and obtain demographic and visit information on source data, in accordance with departmental SOPs and SWIs. Investigate and enter resolutions and/or revisions received across operational systems. Process source data across operational systems. File source documents once the workflow is complete. Maintain departmental metrics in accordance with the goal plan. Participate in required training programs. Report any equipment and/or system problems. Maintain accurate and complete Data Coordination files as defined by the department's SOPs. Archive closed studies upon notification of study lock within the requested time frame. Assist as requested with data reconciliation activities for your studies, ensuring that database updates are completed within the sponsor-requested timelines.

OTHER DUTIES AND RESPONSIBILITIES: Attend Project Assurance meetings and outline feedback to Data Coordination. Support the training of new temporary employees.

QUALIFICATIONS AND SKILLS NEEDED: Education: BS or BA degree in life sciences or a related field preferred. Experience: 2 years of company or related data processing experience. Good organizational, problem-solving, and communication skills. Demonstrated computer proficiency in MS Word, Excel, and Outlook. Detail-oriented with good proofreading skills. Proficient data entry/typing skills.

Posted 3 weeks ago

Apply

3.0 - 6.0 years

5 - 8 Lacs

Hyderabad

Work from Office

Senior Data Developer, Hyderabad

We are seeking a Senior Data Developer with strong skills in AWS serverless computing, data processing, and automation. The ideal candidate will have hands-on experience with AWS Lambda, Amazon Athena, and Python, and a proven ability to design, develop, and maintain robust data solutions in a cloud-based environment.

Key Responsibilities: Design, build, and maintain data pipelines and processing workflows using AWS Lambda and related AWS services. Query and process large datasets using Amazon Athena, optimising for performance and cost efficiency. Develop automation scripts and data transformation logic using Python. Integrate data from multiple sources and ensure data quality, security, and compliance. Collaborate with data engineers, analysts, and business stakeholders to deliver data-driven solutions. Implement monitoring and alerting for data workflows to ensure reliability. Contribute to best practices for data architecture, coding standards, and documentation.

Skills & Experience: Proven experience as a Data Developer, Data Engineer, or in a similar role. Strong hands-on expertise with AWS Lambda and Amazon Athena. Proficiency in Python for automation, ETL, and data processing tasks. Experience with AWS services such as S3, Glue, CloudWatch, and IAM. Familiarity with SQL and data modelling concepts. Understanding of cloud security and compliance best practices. Strong problem-solving skills and the ability to work in Agile environments.

Desirable: AWS certification (e.g., AWS Certified Data Analytics - Specialty or AWS Certified Developer - Associate). Experience with big data frameworks or analytics platforms. Exposure to real-time data streaming (e.g., Kinesis).
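
A hedged sketch of an AWS Lambda handler that starts an Athena query with boto3; the database, query, and output location are placeholders:

```python
# Hypothetical Lambda handler kicking off an Athena query.
import boto3

athena = boto3.client("athena")

def handler(event, context):
    response = athena.start_query_execution(
        QueryString="SELECT status, COUNT(*) FROM events GROUP BY status",
        QueryExecutionContext={"Database": "analytics_db"},
        ResultConfiguration={"OutputLocation": "s3://my-bucket/athena-results/"},
    )
    # Athena is asynchronous: a real workflow would poll get_query_execution
    # (or use Step Functions) before reading results.
    return {"query_execution_id": response["QueryExecutionId"]}
```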

Posted 3 weeks ago

Apply

4.0 - 6.0 years

6 - 8 Lacs

Bengaluru

Work from Office

Backend Engineer - Python

We love technology, we love design, and we love quality. Our diversity makes us unique and creates an inclusive and welcoming workplace where each individual is highly valued. We are looking for someone who is an immediate joiner and wants to grow with us! With us, you have great opportunities to take real steps in your career and the opportunity to take on real responsibility.

Job Title: Backend Engineer - Python. Location: Bangalore / Hybrid. Experience: 4-6 years. Employment Type: Full-time.

About the Role: We are looking for a talented Backend Engineer with strong expertise in Python, SQL, and modern backend frameworks. The role involves building scalable microservices, working with big data processing, and deploying solutions on cloud platforms.

Key Responsibilities: Design, develop, and maintain backend services using Python (FastAPI). Work with PySpark for large-scale data processing. Build and optimize microservices-based architectures. Write efficient SQL queries and ensure data reliability. Deploy and manage applications on GCP or Azure. Implement containerization using Docker. Collaborate with cross-functional teams to deliver high-quality solutions.

Requirements: 4-6 years of backend development experience. Strong proficiency in Python and SQL. Hands-on experience with FastAPI and PySpark. Solid understanding of microservices frameworks. Experience with GCP or Azure cloud platforms. Strong knowledge of Docker. Good problem-solving and communication skills.

Nice to Have: Experience with Kubernetes and CI/CD pipelines. Exposure to large-scale distributed systems or data engineering.

Location: Bangalore. Start Date: Immediate. Work Mode: Hybrid. Language Requirement: English (excellent written and verbal skills). Form of employment: Full-time until further notice; we apply a six-month probationary period. We interview candidates on an ongoing basis, so do not wait to submit your application.
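
A minimal FastAPI sketch matching the stack listed; the resource model is invented purely for illustration:

```python
# Toy microservice: create and fetch orders from an in-memory store.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="orders-service")

class Order(BaseModel):
    order_id: int
    amount: float

ORDERS: dict[int, Order] = {}

@app.post("/orders", status_code=201)
def create_order(order: Order) -> Order:
    ORDERS[order.order_id] = order
    return order

@app.get("/orders/{order_id}")
def get_order(order_id: int) -> Order:
    if order_id not in ORDERS:
        raise HTTPException(status_code=404, detail="order not found")
    return ORDERS[order_id]

# Run locally with: uvicorn module_name:app --reload
```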

Posted 3 weeks ago

Apply

4.0 - 6.0 years

6 - 8 Lacs

Gurugram

Work from Office

The mission of the Finance Platform Strategies (FPS) team is to ensure optimized operating models within Finance, consistent across our global functional teams as appropriate. To promote this objective, we connect key Finance resources with relevant functional specialists from across BlackRock, identify and document business requirements, and manage the implementation and subsequent maintenance of platforms and processes designed to address those requirements. FPS team members serve as internal change management consultants who apply knowledge of BlackRock's resources - people, processes, and technology - to ensure concepts become reality, in support of both new business initiatives and ongoing business process enhancement.

Team Overview: The FPS team has a global footprint. Team members are responsible for managing projects that drive Finance objectives forward through initiatives that can have either regional or global focus. Responsibilities include coordinating project team activity; supporting effective and timely communication among project team members and subject matter experts from across the firm; drafting clear and comprehensive business requirements and project management documentation; designing sound processes and workflow models; partnering with internal or third-party resources to drive design specification sign-off; overseeing technical development progress and coordinating quality unit testing; providing ongoing project updates to all relevant stakeholders; facilitating user training services; and ensuring timely delivery of effective, well-designed solutions.

In addition to project oversight, the team works to manage, on an ongoing basis, the platforms supporting Finance's day-to-day operating model to ensure our technologies remain optimized and our operational processes are sustainable, efficient, and support a robust control environment. The team provides level-one support for Finance operational platforms and, as required, will assess, troubleshoot, and identify actionable opportunities to address complex issues related to the suite of technology solutions Finance employs, including the Oracle Cloud SaaS platform, Financial Consolidations and Close (FCCS), IBM Cognos Planning Analytics (TM1), IBM Cognos Analytics (BI), and our proprietary BlackRock Aladdin platform.

Role Responsibility: Development and Maintenance: Develop and maintain TM1 models and applications, including budgeting, forecasting, and actuals reports. Provide ongoing maintenance and enhancements to existing models. Data Processing: Manage daily, weekly, monthly, and quarterly data processing to support actuals and forecast/budget close processes for the TM1 platform. Requirements Gathering: Lead the development and enhancement work for various TM1 models throughout all phases, from requirements gathering to build, user testing, go-live, and support. Cloud Migration: Play an active role in cloud migration and model transformation initiatives. Report Conversion: Lead report conversion from TM1 perspectives to Planning Analytics for Excel (PAfE) and Planning Analytics Workspace (PAW). User Support: Provide day-to-day user support for the TM1 application, ensuring the accuracy and integrity of data and reports. Documentation: Create and maintain process documentation to ensure it stays current as the process evolves. Stakeholder Collaboration: Collaborate with internal and external stakeholders to understand business requirements.

Experience Required: 4-6 years of relevant experience working as an IBM Planning Analytics (TM1) developer. Proficiency in TM1 Rules, Feeders, Turbo Integrator processes, and system configuration. Experience with SQL queries and stored procedures. A bachelor's degree in finance, IT, or a similar field is preferred. Knowledge of financial instruments and markets is beneficial.

Desired: Good understanding of the Finance / Asset Management industry. Strong written and verbal communication skills are crucial for this role. Natural curiosity and interest in finance data and technologies to optimize finance processes from end to end. Master's degree in Finance or IT.

Personal Qualities: Strong work ethic and accountability owner. Self-starter able to drive positive progress proactively with limited manager direction. Solutions- and service-oriented. Focused attention to detail; high standards for quality and accuracy in work product. Professional, positive, collegial demeanor; collaborative relationship builder. Comfortable interacting with all levels of management and able to thrive in a fast-paced, innovative environment.

This mission would not be possible without our smartest investment: the one we make in our employees. It's why we're dedicated to creating an environment where our colleagues feel welcomed, valued, and supported, with networks, benefits, and development opportunities to help them thrive. For additional information on BlackRock, please visit @blackrock | Twitter: @blackrock | LinkedIn: www.linkedin.com/company/blackrock

BlackRock is proud to be an Equal Opportunity Employer. We evaluate qualified applicants without regard to age, disability, family status, gender identity, race, religion, sex, sexual orientation, and other protected attributes at law.
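
For a flavor of programmatic TM1 work, a heavily hedged sketch using the open-source TM1py library (which the posting does not name); the server details, cube, and element names are all placeholders:

```python
# Hypothetical read of a single cell from a TM1/Planning Analytics cube.
from TM1py import TM1Service

with TM1Service(address="tm1.example.com", port=12354,
                user="admin", password="secret", ssl=True) as tm1:
    # Element order follows the cube's dimension order; all names invented.
    value = tm1.cubes.cells.get_value("Budget", "2025,Total Company,Revenue")
    print(value)
```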

Posted 3 weeks ago

Apply

4.0 - 7.0 years

6 - 9 Lacs

Hyderabad

Work from Office

Experience Required: Moderate experience, minimum 24 months overall, with at least 12 months of experience in one or more of the following: Software Testing, Bug Triaging, Audits / Quality Checks, Technical Issue Resolution, Subject Matter Expertise.

Key Responsibilities: Take end-to-end ownership of assigned responsibilities to maintain high program health. Manage work allocation and ensure defined targets are met or exceeded (productivity, quality, SLA, efficiency, utilization). Ensure process adherence and identify process gaps for improvement. Conduct regular quality audits. Handle policy, training, reporting, and quality management where no separate POCs exist. Perform root cause analysis (fishbone, RCA, 5 Whys, etc.) to resolve issues effectively. Identify and escalate high-impact issues quickly with minimal downtime. Manage multiple responsibilities while ensuring core duties are completed.

Skills & Competencies: Strong proficiency in MS Office / Google Suite. Basic knowledge of SQL and experience with JIRA or similar ticketing tools. Proficient in Excel/Google Sheets (pivot tables, VLOOKUP, data processing). Good knowledge of data analysis techniques. Excellent logical reasoning, problem-solving, and attention to detail. Strong English reading comprehension and writing skills (concise and accurate). Ability to read and interpret complex SOPs. High capability to perform repetitive tasks with accuracy. Ability to memorize technical/engineering terminologies and project details. Familiarity with smartphones, test platforms, and navigation tools.

Posted 3 weeks ago

Apply

4.0 - 7.0 years

6 - 9 Lacs

Bengaluru

Work from Office

Position Overview. Job Title: Software Development Engineer 2. Department: Technology. Location: Bangalore, India. Reporting To: Senior Research Manager - Data.

Position Purpose: The Research Engineer - Data will play a pivotal role in advancing TookiTaki's AI-driven compliance and financial crime prevention platforms through applied research, experimentation, and data innovation. This role is ideal for professionals who thrive at the intersection of research and engineering, turning cutting-edge data science concepts into production-ready capabilities that enhance TookiTaki's competitive edge in fraud prevention, AML compliance, and data intelligence. The role exists to bridge research and engineering by: Designing and executing experiments on large, complex datasets. Prototyping new data-driven algorithms for financial crime detection and compliance automation. Collaborating across product, data science, and engineering teams to transition research outcomes into scalable, real-world solutions. Ensuring the robustness, fairness, and explainability of AI models within TookiTaki's compliance platform.

Key Responsibilities: Applied Research & Prototyping: Conduct literature reviews and competitive analysis to identify innovative approaches for data processing, analytics, and model development. Build experimental frameworks to test hypotheses using real-world financial datasets. Prototype algorithms in areas such as anomaly detection, graph-based analytics, and natural language processing for compliance workflows. Data Engineering for Research: Develop data ingestion, transformation, and exploration pipelines to support experimentation. Work with structured, semi-structured, and unstructured datasets at scale. Ensure reproducibility and traceability of experiments. Algorithm Evaluation & Optimization: Evaluate research prototypes using statistical, ML, and domain-specific metrics. Optimize algorithms for accuracy, latency, and scalability. Conduct robustness, fairness, and bias evaluations on models. Collaboration & Integration: Partner with data scientists to transition validated research outcomes into production-ready code. Work closely with product managers to align research priorities with business goals. Collaborate with cloud engineering teams to deploy research pipelines in hybrid environments. Documentation & Knowledge Sharing: Document experimental designs, results, and lessons learned. Share best practices across engineering and data science teams to accelerate innovation.

Qualifications and Skills: Education: Required: Bachelor's degree in Computer Science, Data Science, Applied Mathematics, or a related field. Preferred: Master's or PhD in Machine Learning, Data Engineering, or a related research-intensive field. Experience: Minimum 4-7 years in data-centric engineering or applied research roles. Proven track record of developing and validating algorithms for large-scale data processing or machine learning applications. Experience in financial services, compliance, or fraud detection is a strong plus.

Technical Expertise: Programming: Proficiency in Scala, Java, or Python. Data Processing: Experience with Spark, Hadoop, and Flink. ML/Research Frameworks: Hands-on with TensorFlow, PyTorch, or scikit-learn. Databases: Experience with both relational (PostgreSQL, MySQL) and NoSQL databases (MongoDB, Cassandra, ElasticSearch). Cloud Platforms: Experience with AWS (preferred) or GCP for research and data pipelines. Tools: Familiarity with experiment tracking tools like MLflow or Weights & Biases.
Application Deployment: Strong experience with CI/CD practices and containerized deployments through Kubernetes, Docker, etc. Streaming Frameworks: Strong experience in creating highly performant and scalable real-time streaming applications with Kafka at the core (see the sketch at the end of this posting). Data Lakehouse: Experience with one of the modern data lakehouse platforms/formats such as Apache Hudi, Iceberg, or Paimon is a very strong plus.

Soft Skills: Strong analytical and problem-solving abilities. Clear, concise communication skills for cross-functional collaboration. Adaptability in fast-paced, evolving environments. Curiosity-driven, with a bias towards experimentation and iteration.

Key Competencies: Innovation Mindset: Ability to explore and test novel approaches that push boundaries in data analytics. Collaboration: Works effectively with researchers, engineers, and business stakeholders. Technical Depth: Strong grasp of advanced algorithms and data engineering principles. Problem Solving: Dives deep into logs, metrics, and code, identifying problems and opportunities for performance tuning and optimization. Ownership: Drives research projects from concept to prototype to production. Adaptability: Thrives in ambiguity and rapidly changing priorities.

Preferred: Certifications in AWS Big Data, Apache Spark, or similar technologies. Experience in compliance or financial services domains.

Success Metrics: Research-to-Production Conversion: % of validated research projects integrated into TookiTaki's platform. Model Performance Gains: Documented improvements in accuracy, speed, or robustness from research initiatives. Efficiency of Research Pipelines: Reduced time from ideation to prototype completion. Collaboration Impact: Positive feedback from cross-functional teams on research integration.

Benefits: Competitive Salary: Aligned with industry standards and experience. Professional Development: Access to training in big data, cloud computing, and data integration tools. Comprehensive Benefits: Health insurance and flexible working options. Growth Opportunities: Career progression within Tookitaki's rapidly expanding Services Delivery team.

Introducing Tookitaki. Tookitaki: The Trust Layer for Financial Services. Tookitaki is transforming financial services by building a robust trust layer that focuses on two crucial pillars: preventing fraud to build consumer trust and combating money laundering to secure institutional trust. Our trust layer leverages collaborative intelligence and a federated AI approach, delivering powerful, AI-driven solutions for real-time fraud detection and AML (Anti-Money Laundering) compliance.

How We Build Trust - Our Unique Value Propositions: AFC Ecosystem - Community-Driven Financial Crime Protection: The Anti-Financial Crime (AFC) Ecosystem is a community-driven platform that continuously updates financial crime patterns with real-time intelligence from industry experts. This enables our clients to stay ahead of the latest money laundering and fraud tactics. Leading digital banks and payment platforms rely on Tookitaki to protect them against evolving financial crime threats. By joining this ecosystem, institutions benefit from the collective intelligence of top industry players, ensuring robust protection. FinCense - End-to-End Compliance Platform: Our FinCense platform is a comprehensive compliance solution that covers all aspects of AML and fraud prevention, from name screening and customer due diligence (CDD) to transaction monitoring and fraud detection.
This ensures financial institutions not only meet regulatory requirements but also mitigate the risks of non-compliance, providing the peace of mind they need as they scale.

Industry Recognition and Global Impact: Tookitaki's innovative approach has been recognized by some of the leading financial entities in Asia. We have also earned accolades from key industry bodies such as FATF and received prestigious awards such as World Economic Forum Technology Pioneer, Forbes Asia 100 to Watch, and Chartis RiskTech100. Serving some of the world's most prominent banks and fintech companies, Tookitaki is continuously redefining the standards of financial crime detection and prevention, creating a safer and more trustworthy financial ecosystem for everyone.
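
A sketch of the Kafka-centric streaming pattern the posting emphasizes, using PySpark Structured Streaming; the broker, topic, and paths are placeholders (the spark-sql-kafka connector package is assumed to be on the classpath):

```python
# Hypothetical streaming consumer: Kafka topic -> parsed records -> Parquet sink.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("txn_stream").getOrCreate()

stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "transactions")
    .load()
)

parsed = stream.select(
    F.col("key").cast("string"),
    F.col("value").cast("string").alias("payload"),
)

query = (
    parsed.writeStream.format("parquet")
    .option("path", "/data/streams/transactions")
    .option("checkpointLocation", "/data/checkpoints/transactions")
    .start()
)
query.awaitTermination()
```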

Posted 3 weeks ago

Apply

5.0 - 7.0 years

7 - 9 Lacs

chennai

Work from Office

Job Description
We are looking for a highly skilled Lead Data Analyst with strong expertise in Data Warehousing & Analytics to join our team. The ideal candidate will have extensive experience in designing and managing data solutions, advanced SQL proficiency, and hands-on expertise in Python.

Key Responsibilities:
Design, develop, and maintain scalable data warehouse solutions.
Write and optimize complex SQL queries for data extraction, transformation, and reporting.
Develop and automate data pipelines using Python (a hedged sketch follows this listing).
Work with AWS cloud services for data storage, processing, and analytics.
Collaborate with cross-functional teams to provide data-driven insights and solutions.
Ensure data integrity, security, and performance optimization.

Qualifications
5-7 years of experience in Data Warehousing & Analytics.
Strong proficiency in writing complex SQL queries, with a deep understanding of query optimization, stored procedures, and indexing.
Hands-o
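As a rough illustration of the SQL-plus-Python pipeline work this role describes, here is a minimal extract-transform-load step using pandas and SQLAlchemy. SQLite stands in for the warehouse, and the table and column names are assumptions made up for the example.

```python
# Hedged ETL sketch: push an aggregate down to the database as SQL,
# enrich it in pandas, and materialize the result for reporting.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("sqlite:///warehouse.db")  # stand-in for a real DSN

# Extract: aggregation done in SQL so only the summary leaves the database.
query = """
SELECT region,
       DATE(order_ts) AS order_day,
       SUM(amount)    AS revenue
FROM   orders
GROUP  BY region, DATE(order_ts)
"""
daily = pd.read_sql(query, engine)

# Transform: add a 7-day rolling revenue per region.
daily = daily.sort_values(["region", "order_day"])
daily["revenue_7d"] = (
    daily.groupby("region")["revenue"]
         .transform(lambda s: s.rolling(7, min_periods=1).sum())
)

# Load: write the summary table back for BI/reporting consumers.
daily.to_sql("daily_revenue_summary", engine, if_exists="replace", index=False)
```

In practice a job like this would be scheduled (e.g., via an orchestrator) and the rolling window chosen to match the reporting cadence.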

Posted 3 weeks ago

Apply

5.0 - 8.0 years

7 - 10 Lacs

pune

Work from Office

":" Agivant is seeking an experienced Modern Microservice Developer to join our team and contribute to the design, development, and optimization of scalable microservices and data processing workflows. The ideal candidate will have expertise in Python, containerization, and orchestration tools, along with strong skills in SQL and data integration. Key Responsibilities: Develop and optimize data processing workflows and large-scale data transformations using Python. Write and maintain complex SQL queries in Snowflake to support efficient data extraction, manipulation, and aggregation. Integrate diverse data sources and perform validation testing to ensure data accuracy and integrity. Design and deploy containerized applications using Docker, ensuring scalability and reliability. Build and maintain RESTful APIs to support microservices architecture. Implement CI/CD pipelines and manage orchestration tools such as Kubernetes or ECS for automated deployments. Monitor and log application performance, ensuring high availability and quick issue resolution. Requirements Mandatory: Bachelors degree in Computer Science, Engineering, or a related field. 5-8 years of experience in Python development, with a focus on data processing and automation. Proficiency in SQL, with hands-on experience in Snowflake. Strong experience with Docker and containerized application development. Solid understanding of RESTful APIs and microservices architecture. Familiarity with CI/CD pipelines and orchestration tools like Kubernetes or ECS. Knowledge of logging and monitoring tools to ensure system health and performance. Preferred Skills: Experience with cloud platforms (AWS, Azure, or GCP) is a plus. ","Work_Experience":"5-8 years (Senior Engineer)","Job_Type":"Full time" , "Job_Opening_Name":"Python Microservice Developer" , "State":"Maharashtra" , "Country":"India" , "Zip_Code":"411045" , "id":"86180000007532968" , "Publish":true , "Date_Opened":"2025-08-05" , "Keep_on_Career_Site":false}]);

Posted 3 weeks ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

pune

Work from Office

[{"Remote_Job":false , "Posting_Title":"Sr. Data Engineer" , "Is_Locked":false , "City":"Pune" , "Industry":"IT Services" , "Job_Opening_ID":"RRF_5698" , "Job_Description":" Who are we Fulcrum Digital is an agile and next-generation digital accelerating company providing digital transformation and technology services right from ideation to implementation. These services have applicability across a variety of industries, including banking & financial services, insurance, retail, higher education, food, health care, and manufacturing. TheRole Designing and building optimized data pipelines using cutting-edge technologies in a cloud environment to drive analytical insights. Constructing infrastructure for efficient ETL processes from various sources and storage systems. Leading the implementation of algorithms and prototypes to transform raw data into useful information. Architecting, designing, and maintaining database pipeline architectures, ensuring readiness for AI/ML transformations. Creating innovative data validation methods and data analysis tools. Ensuring compliance with data governance and security policies. Interpreting data trends and patterns to establish operational alerts. Developing analytical tools, programs, and reporting mechanisms. Conducting complex data analysis and presenting results effectively. Preparing data for prescriptive and predictive modeling. Continuously exploring opportunities to enhance data quality and reliability. Applying strong programming and problem-solving skills to develop scalable solutions. Requirements Experience in the Big Data technologies (Hadoop, Spark, Nifi,Impala) 5+ years of hands-on experience designing, building, deploying, testing, maintaining, monitoring, and owning scalable, resilient, and distributed data pipelines. High proficiency in Scala/Java and Spark for applied large-scale data processing. Expertise with big data technologies, including Spark, Data Lake, and Hive. Solid understanding of batch and streaming data processing techniques. Proficient knowledge of the Data Lifecycle Management process, including data collection, access, use, storage, transfer, and deletion. Expert-level ability to write complex, optimized SQL queries across extensive data volumes. Experience on HDFS, Nifi, Kafka. Experience on Apache Ozone, Delta Tables, Databricks, Axon(Kafka), Spring Batch, Oracle DB Familiarity with Agile methodologies. Obsession for service observability, instrumentation, monitoring, and alerting. Knowledge or experience in architectural best practices for building data lakes. " , "Job_Type":"Permanent" , "Job_Opening_Name":"Sr. Data Engineer" , "State":"Maharashtra" , "Country":"India" , "Zip_Code":"411001" , "id":"613047000046604967" , "Publish":true , "Keep_on_Career_Site":false , "Date_Opened":"2025-08-12"}]

Posted 3 weeks ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

bengaluru

Work from Office

Function: The Data & Analytics team is responsible for integrating new data sources, creating data models, developing data dictionaries, and building machine learning models for Wholesale Bank. The primary objective is to design and deliver data products that assist squads at Wholesale Bank in achieving business outcomes and generating valuable business insights. Within this job family, we distinguish between Data Analysts and Data Scientists. Both roles work with data, write queries, collaborate with engineering teams to source relevant data, perform data munging (transforming data into a format suitable for analysis and interpretation), and extract meaningful insights from the data. Data Analysts typically work with relatively simple, structured SQL databases or other BI tools and packages. On the other hand, Data Scientists are expected to develop statistical models and be hands-on with machine learning and advanced programming, including Generative AI. Requirements: We are seeking a highly skilled Data Science, Machine Learning and Generative AI Specialist with 5+ years of relevant experience in Advanced Analytics, Statistical, ML model development, deep learning, and AI research. In this role, candidates will be responsible for leveraging data-driven insights and machine learning techniques to solve complex business problems, optimize processes, and drive innovation. The ideal candidate will be skilled in working with large datasets to identify opportunities for product and process optimization and using models to assess the effectiveness of various actions. They should have substantial experience in applying diverse data mining and analysis techniques, utilizing various data tools, developing and deploying models, creating and implementing algorithms, and conducting simulations. Generative AI exposure of advanced prompt engineering, chain of thought techniques, and AI agents to drive our cutting-edge will support candidacy. Qualifications: Bachelors, Masters or Ph.D in Engineering, Data Science, Mathematics, Statistics, or a related field. 5+ years of experience in Advance Analytics, Machine learning, Deep learning. Proficiency in programming languages such as Python, and familiarity with machine learning libraries (e.g., Numpy, Pandas, TensorFlow, Keras, PyTorch, Scikit-learn). Experience with generative models such as GANs (Generative Adversarial Networks), VAEs (Variational Autoencoders), and transformer-based models (e.g., GPT-3/4, BERT, DALL E). Understanding of model fine-tuning, transfer learning, and prompt engineering in the context of large language models (LLMs). Strong experience with data wrangling, cleaning, and transforming raw data into structured, usable formats. Hands-on experience in developing, training, and deploying machine learning models for various applications (e.g., predictive analytics, recommendation systems, anomaly detection). Experience with cloud platforms (AWS, GCP, Azure) for model deployment and scalability. Proficiency in data processing and manipulation techniques. Hands-on experience in building data applications using Streamlit or similar tools. Advanced knowledge in prompt engineering, chain of thought processes, and AI agents. Excellent problem-solving skills and the ability to work effectively in a collaborative environment. Strong communication skills to convey complex technical concepts to non-technical stakeholders. Good to Have: Experience in the [banking/financial services/industry-specific] sector. 
Familiarity with cloud-based machine learning platforms such as Azure, AWS, or GCP. Proven experience working with OpenAI or similar large language models (LLMs). Experience with deep learning, NLP, or computer vision. Experience with big data technologies (e.g., Hadoop, Spark) is a plus. Certifications in Data Science, Machine Learning, or AI. Key Responsibilities: Extract and analyze data from company databases to drive the optimization and enhancement of product development and marketing strategies. Analyze large datasets to uncover trends, patterns, and insights that can influence business decisions. Leverage predictive and AI/ML modeling techniques to enhance and optimize customer experience, boost revenue generation, improve ad targeting, and more. Design, implement, and optimize machine learning models for a wide range of applications such as predictive analytics, natural language processing, recommendation systems, and more. Stay up-to-date with the latest advancements in data science, machine learning, and artificial intelligence to bring innovative solutions to the team. Communicate complex findings and model results effectively to both technical and non-technical stakeholders. Implement advanced data augmentation, feature extraction, and data transformation techniques to optimize the training process. Deploy generative AI models into production environments, ensuring they are scalable, efficient, and reliable for real-time applications. Use cloud platforms (AWS, GCP, Azure) and containerization tools (e.g., Docker, Kubernetes) for model deployment and scaling. Create interactive data applications using Streamlit for various stakeholders. Conduct prompt engineering to optimize AI models performance and accuracy. Continuously monitor, evaluate, and refine models to ensure performance and accuracy. Conduct in-depth research on the latest advancements in generative AI techniques and apply them to real-world business problems.
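As a small example of the Streamlit-over-ML work this listing mentions, here is a minimal interactive app serving a scikit-learn model. The synthetic dataset, feature names, and "churn-risk" framing are illustrative assumptions, not details from the role.

```python
# Hedged sketch: a Streamlit front end over a scikit-learn classifier.
# Run with: streamlit run app.py
import streamlit as st
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

@st.cache_resource
def train_model():
    # Synthetic stand-in for data that would normally come from a warehouse.
    X, y = make_classification(n_samples=2000, n_features=4, random_state=42)
    return LogisticRegression(max_iter=1000).fit(X, y)

model = train_model()

st.title("Churn-risk scorer (illustrative demo)")
inputs = [
    st.slider(f"feature_{i}", -3.0, 3.0, 0.0)  # hypothetical features
    for i in range(4)
]
prob = model.predict_proba([inputs])[0][1]
st.metric("Predicted risk", f"{prob:.1%}")
```

Caching the fitted model with st.cache_resource keeps retraining out of the interaction loop, so the sliders stay responsive even with a heavier model behind them.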

Posted 3 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies