7.0 - 10.0 years
14 - 18 Lacs
bengaluru
Work from Office
This position reports to: Head of Central Finance.

Your role and responsibilities: We are looking for an experienced and technically proficient Data Architect to lead the design, integration, and optimization of technical solutions within the Central Finance (CFIN) landscape. The Data Architect will be responsible for ensuring that data replication and technical activities are fully aligned with business needs, effectively integrated with other enterprise applications, and supported by automated solutions that enhance operational efficiency. This role involves close collaboration with internal teams, including Finance, IS Architecture, and external vendors, to maintain and evolve the data architecture, ensuring it meets business requirements and is fully compliant with ABB's standards. This role contributes to the Finance Services business, Finance Process Data Systems division, in Bangalore, India.

You will be mainly accountable for:
- Solution Design & Validation: Review and validate the design of all data and technical solutions within the CFIN framework, ensuring they are aligned with business goals and technical requirements.
- Ownership of Data Architecture: Define, document, and own the overall data architecture within the CFIN ecosystem, including technical components, modules, and integration with other applications.
- Data Replication and Automation: Error resolution through AIF monitoring; clearing/reference information; reconciliation based on PC, G/L, and CC; experience maintaining and troubleshooting map-managed errors; enhancing replications to address complex business scenarios; SLT-based filtering; deep knowledge of migration; replication assistance in relation to MDG; and experience recognizing and resolving currency/value mismatches in real-time replicated data.
- Integration with Other Processes: Collaborate with other business streams (O2C, P2P, P2D, R2R, TAX, Treasury) to ensure data standards are maintained and to design comprehensive data solutions that incorporate all work streams.
- Maintain Solution Roadmap: Keep the target data solution architecture up to date, documenting changes to the roadmap and their impact on the broader enterprise architecture.
- Collaboration with Stakeholders: Work closely with the CFIN solution team, IS architects, vendors, and business stakeholders (including Finance, Process, Data, and Systems Finance teams) to configure, maintain, and enhance the CFIN landscape, ensuring business continuity.
- Business Process Alignment: Collaborate with Data Global Process Owners (GPOs) and business teams to define and implement robust data solutions that align with business requirements and global best practices.
- Automation & Innovation: Drive the regular implementation of automation solutions within the CFIN system to streamline data processes, reduce manual effort, and improve efficiency.
- Requirements Validation: Support the validation of business and functional requirements alongside Process Owners, the FPDS team, and Technical Leads, ensuring processes are allocated to the appropriate applications and technologies.
- Compliance & Standards: Ensure that all data and technical solutions and work processes comply with ABB's internal standards, policies, and regulatory requirements.
- Continuous Improvement: Maintain and enhance domain expertise in data and related technologies, keeping abreast of industry trends and ABB standards to drive continuous improvement within the organization.
Qualifications for the role:
- Education: Bachelor's or Master's degree in Computer Science, Finance, Information Systems, Business Administration, or a related field. Relevant certifications in SAP ECC FICO, SAP S/4HANA, SAP CFIN, or IT architecture.
- At least 7-10 years of experience as a Data Architect, SAP Architect, or in a similar role, with deep knowledge of data processes and system integration.
- Advanced expertise in SAP Central Finance (CFIN), SAP S/4HANA, or other ERP systems.
- Proficiency in data process automation tools and strategies.
- Extensive experience with data migration and replication between SAP systems.
- In-depth knowledge of SAP Business Technology Platform (BTP), Fiori, and other related applications.
- Strong understanding of real-time data replication and automation standards.
- Strong leadership and team management skills, with the ability to motivate and guide cross-functional teams.
- Excellent collaboration skills, with the ability to coordinate between stakeholders including business leaders, technical teams, and external partners.
- A strong focus on continuous improvement and automation, with a passion for driving innovation within enterprise systems.
- Experience managing relationships with external vendors and third-party service providers to ensure delivery of high-quality solutions.
- Ability to adapt to a fast-paced, dynamic work environment and manage multiple priorities effectively.
Posted 3 weeks ago
6.0 - 10.0 years
20 - 30 Lacs
bengaluru
Remote
- Minimum 6 years of experience in data engineering and analytics.
- Hands-on experience with OCI Data Integration, Oracle GoldenGate, Data Flow, and Object Storage.
- Proficiency in Autonomous Database (ADW).
- Expertise in Oracle REST Data Services (ORDS) for exposing APIs.
- Strong knowledge of Oracle SQL & PL/SQL (procedures, packages, query optimization & tuning).
- Understanding of data quality, lineage, access control, and encryption.
- Experience with big data tools (Spark, Hadoop, Oracle Big Data Service).
- Exposure to API development (GraphQL, microservices).
- Cloud knowledge beyond OCI (AWS, Azure, GCP).
- DevOps for data (Git, Terraform, Docker/Kubernetes).
- Familiarity with Oracle Data Catalog.
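A minimal sketch of the kind of ORDS usage this role calls for: querying a REST-enabled table from Python. The base URL, resource name, and column names are hypothetical placeholders; the `items` array is the standard shape of an ORDS AutoREST response.

```python
import requests

# Hypothetical ORDS base URL for an autonomous database (placeholder values)
BASE = "https://example-adb.adb.ap-mumbai-1.oraclecloudapps.com/ords/hr"

resp = requests.get(f"{BASE}/employees/", params={"limit": 25}, timeout=30)
resp.raise_for_status()

# ORDS AutoREST wraps rows in an "items" array
for row in resp.json().get("items", []):
    print(row["employee_id"], row["last_name"])
```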
Posted 3 weeks ago
6.0 - 11.0 years
8 - 16 Lacs
coimbatore, bengaluru, mumbai (all areas)
Hybrid
Job Description Summary: Lead a high-performing technical team in delivering survey data analytics deliverables and intelligent automation solutions. Initially focused on client engagement and operational excellence, this role evolves into spearheading backend development and process innovation across the market research lifecycle.

Key Responsibilities:
- Client & Stakeholder Engagement (short-term focus): Act as the primary point of contact for key clients, translating research goals into technical deliverables. Ensure timely, accurate, and high-quality outputs aligned with client expectations and market research standards. Partner with research and project managers to ensure stakeholder feedback is embedded in deliverables.
- Team Leadership & Capability Development (short-term focus): Guide and mentor a multidisciplinary team (SQL, R, Python, Tableau) in delivering data processing and reporting solutions. Lead sprint planning, resource allocation, and task optimization across concurrent client deliverables.
- Technical Strategy & Innovation (growing long-term focus): Architect automation and data products to accelerate turnaround time and boost data integrity. Conceptualize and build modular backend components using Python, APIs, microservices, and containerized frameworks.

Preferred candidate profile:
- 7-10 years of experience in the market research tech stack.
- Strong leadership with a track record of delivering end-to-end client projects.
- Domain knowledge in market research: understanding of surveys, data collection, and data validation/processing/reporting.
- Knowledge of SQL, Python, or R.
Posted 3 weeks ago
0.0 - 2.0 years
2 - 4 Lacs
pune
Work from Office
You will be part of the IT department that manages the core data processing and analytics platform for the firm. You will work closely with experienced data engineers, analysts, and developers across the globe. The ideal candidate should have a basic understanding of data processing and analytics, with technical skills in Python, PySpark, and SQL; any experience with cloud platforms (AWS, GCP, Azure) is a plus.

- Bachelor's degree in Engineering, Computer Science, Data Science, Artificial Intelligence, or equivalent.
- 0-2 years of experience in data-related development using Python, PySpark, and SQL.
- Basic understanding of big data processing frameworks like Apache Spark and of AI/ML concepts.
- Familiarity with data visualization libraries such as Matplotlib or Seaborn.
- Basic knowledge of version control systems like Git.
- Interest in AI and machine learning applications in finance.
- Eagerness to learn new technologies, including AI tools and frameworks, and to adapt quickly.
- Good communication skills, including oral and written English.
- Self-motivated and able to work well in a team environment.

Keywords: Python, PySpark, SQL, data processing, analytics, AI, machine learning, cloud, AWS, Azure, GCP, Git.
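A minimal sketch of the entry-level PySpark work described above: reading a file, filtering, and aggregating with the DataFrame API. The file path and column names are illustrative only.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("intro-etl").getOrCreate()

# Hypothetical input file and schema
trades = spark.read.csv("trades.csv", header=True, inferSchema=True)

daily = (
    trades
    .filter(F.col("quantity") > 0)                 # drop cancelled/zero rows
    .groupBy("trade_date", "symbol")
    .agg(
        F.sum("quantity").alias("total_qty"),
        F.avg("price").alias("avg_price"),
    )
)
daily.show()
```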
Posted 3 weeks ago
1.0 - 2.0 years
3 - 4 Lacs
bengaluru
Work from Office
Key Responsibilities:
- Design, develop, and maintain scalable and resilient data pipelines (batch, streaming) using modern orchestration and ETL/ELT frameworks.
- Build and optimize data lakes, data warehouses, and real-time data systems to support analytics, reporting, and AI workflows.
- Define and implement data modeling, schema evolution, governance, and quality standards across structured and unstructured sources.
- Partner with platform, product, and AI/ML teams to ensure availability, traceability, and performance of data systems in production.
- Implement observability, logging, alerting, and lineage tools for data flows and data infrastructure health.
- Integrate with cloud-native and third-party APIs, databases, and message buses to ingest and transform diverse data sources.
- Drive adoption of best practices in security, cost-efficiency, testing, and CI/CD for data infrastructure.

Required Skills & Qualifications:
- Bachelor's degree with 1-2 years of experience, or Master's degree, in computer science, data engineering, or a related field.
- Previous experience in data engineering or backend systems development.
- Experience designing and operating data pipelines using tools such as Apache Airflow, dbt, Dagster, or Prefect.
- Proficiency in distributed data processing (e.g., Spark, Flink, Kafka Streams) and SQL engines (Presto, Trino, BigQuery, Snowflake).
- Deep understanding of data modeling, partitioning, columnar storage formats (Parquet, ORC), and schema design.
- Strong programming skills in Python, Java, or Scala, and familiarity with containerization and cloud infrastructure (AWS/GCP/Azure).
- Hands-on experience with data governance, access control, and sensitive data handling is preferred.
- Comfortable working in agile, MLOps-driven, cross-functional engineering teams.
- Stress-resilient team player, proficient in spoken English.
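A minimal sketch of an orchestrated batch pipeline of the kind listed above, using Apache Airflow (one of the named tools). DAG and task names are illustrative, and the `schedule` argument assumes Airflow 2.4+.

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Placeholder: pull raw records from a source API or database
    print("extracting from source")

def load():
    # Placeholder: write transformed records to the warehouse
    print("loading into warehouse")

with DAG(
    dag_id="daily_ingest",              # illustrative name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                  # `schedule` requires Airflow 2.4+
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_extract >> t_load                 # load runs only after extract succeeds
```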
Posted 3 weeks ago
1.0 - 3.0 years
3 - 5 Lacs
bengaluru
Work from Office
We are seeking a highly skilled and experienced Senior Data Scientist to join our team. In this role, you will leverage advanced Natural Language Processing (NLP), Deep Learning, and traditional Machine Learning (ML) techniques to build and optimize our NLP-driven products and services. You will work closely with cross-functional teams to develop scalable, high-performance NLP models that solve complex language and data-related challenges. If you are passionate about advancing language understanding and have hands-on experience with the latest NLP frameworks, models, and techniques, we would love to speak with you!

Responsibilities:
- Model Development and Optimization: Design, build, and deploy NLP models, including transformer models (e.g., BERT, GPT, T5) and other state-of-the-art architectures, as well as traditional machine learning algorithms (e.g., SVMs, logistic regression) for specific applications.
- Data Processing and Feature Engineering: Develop robust pipelines for text preprocessing, feature extraction, and data augmentation for structured and unstructured data.
- Model Fine-Tuning and Transfer Learning: Fine-tune large language models for specific applications, leveraging transfer learning techniques, domain adaptation, and a mix of deep learning and traditional ML models.
- Performance Optimization: Optimize model performance for scalability and latency, applying techniques such as quantization and ONNX export.
- Research and Innovation: Stay updated with the latest research in NLP, Deep Learning, and Generative AI, applying innovative solutions and techniques (e.g., RAG applications, prompt engineering, self-supervised learning).
- Stakeholder Communication: Collaborate with stakeholders to gather requirements, conduct due diligence, and communicate project updates effectively, ensuring alignment between technical solutions and business goals.
- Evaluation and Testing: Establish metrics, benchmarks, and methodologies for model evaluation, including cross-validation and error analysis, ensuring models meet accuracy, fairness, and reliability standards.
- Deployment and Monitoring: Oversee the deployment of NLP models in production, ensuring seamless integration, model monitoring, and retraining processes.

Requirements (Minimum Qualifications):
- Education: Bachelor's degree in computer science or a related field.
- Experience: 1 to 3 years of experience in NLP, Deep Learning, and ML, with a proven track record of developing and optimizing both LLM-based and traditional machine learning NLP models.

Technical Skills:
- Advanced NLP Techniques: Proficiency in transformer models (e.g., BERT, RoBERTa, GPT, T5), and experience with techniques like named entity recognition (NER), text classification, summarization, question answering, and language generation.
- Programming: Strong programming skills in Python, with experience in NLP libraries such as Hugging Face Transformers, spaCy, NLTK, and Gensim.
- ML and DL Frameworks: Proficiency in ML and Deep Learning frameworks such as TensorFlow, PyTorch, and scikit-learn.
- Traditional ML Techniques: Familiarity with traditional ML models like SVMs, logistic regression, decision trees, and KNN.
- Evaluation Metrics: Familiarity with NLP and ML evaluation metrics (e.g., BLEU, ROUGE, F1, accuracy, precision, recall) and experience designing experiments and tests.

Soft Skills:
- Communication Skills: Excellent verbal and written communication skills, capable of explaining technical concepts to both technical and non-technical stakeholders.
- Enthusiasm for Learning: A self-motivated individual with a passion for continuous learning and openness to exploring new technologies.
- Teamwork and Collaboration: A team player with the ability to work effectively in a collaborative, fast-paced environment.

EAIGG: Draup is a member of the Ethical AI Governance Group. As an AI-first company, Draup has been a champion of ethical and responsible AI since day one. Our models adhere to the strictest data standards and are routinely audited for bias.
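A minimal sketch of the transformer-based text classification and NER work this role describes, using Hugging Face pipelines. The checkpoints are public illustrative ones, not the company's production stack, and the input sentences are made up.

```python
from transformers import pipeline

# Text classification with a public pretrained checkpoint (illustrative choice)
classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)
print(classifier("The onboarding flow was painless and quick."))

# Named entity recognition; aggregation merges word pieces into whole entities
ner = pipeline("ner", aggregation_strategy="simple")
print(ner("Acme Corp opened a research office in Bengaluru in 2019."))
```

In practice these base checkpoints would be fine-tuned on domain data before deployment, as the responsibilities above note.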
Posted 3 weeks ago
0.0 - 2.0 years
1 - 1 Lacs
guntur
Work from Office
Roles and Responsibilities:
- Perform data entry operations with high accuracy and speed using MS Office tools such as Excel.
- Provide administrative support to the team by managing documents, spreadsheets, and databases.
- Use computer operating skills to efficiently process data and maintain organized records.
- Offer clerical assistance with back-office tasks, including filing systems and document management.
- Demonstrate strong typing ability, with a minimum speed of 40 wpm.

Desired Candidate Profile:
- 0-2 years of experience in a similar role or industry (data processing/back office).
- B.Com degree from a recognized college.
- Proficiency in MS Office applications (Word, Excel) with excellent typing skills (minimum 40 wpm).
- Strong understanding of computer operating principles and basic knowledge of software packages.
Posted 3 weeks ago
6.0 - 9.0 years
4 - 8 Lacs
telangana
Work from Office
Senior Python Developer (6 Years Experience). Location: Pune/Bangalore. Employment Type: Full-time/Contract. Experience: 6 Years.

Role Overview: We are seeking a Senior Python Developer with 6 years of experience, specializing in database technologies like DB2 or Snowflake. The ideal candidate will have a strong background in backend development, data processing, and performance optimization. Experience with Java is a plus.

Key Responsibilities:
- Design, develop, and maintain scalable and efficient Python applications.
- Work extensively with DB2 or Snowflake for data modeling, query optimization, and performance tuning.
- Develop and optimize SQL queries, stored procedures, and data pipelines.
- Collaborate with cross-functional teams to integrate backend services with frontend applications.
- Implement best practices for code quality, security, and performance.
- Write unit tests and participate in code reviews.
- Troubleshoot and resolve production issues.

Required Skills:
- 6 years of experience in Python development.
- Strong experience with DB2 or Snowflake (SQL tuning, stored procedures, ETL workflows).
- Hands-on experience with Python frameworks such as Flask, Django, or FastAPI.
- Proficiency in writing complex SQL queries and database optimization.
- Experience with cloud platforms (AWS, Azure, or GCP) and CI/CD pipelines.
- Familiarity with version control (Git) and Agile methodologies.

Good to Have:
- Experience in Java for backend services or microservices.
- Knowledge of Kafka, RabbitMQ, or other messaging systems.
- Exposure to containerization tools like Docker and Kubernetes.
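A minimal sketch of Python-to-Snowflake work of the kind listed above, using the snowflake-connector-python package. All connection parameters, table names, and columns are placeholders.

```python
import snowflake.connector

# Placeholder credentials; in practice these come from a secrets manager
conn = snowflake.connector.connect(
    account="xy12345.ap-south-1",
    user="REPORTING_USER",
    password="...",
    warehouse="ANALYTICS_WH",
    database="SALES",
    schema="PUBLIC",
)
cur = conn.cursor()

# Bound parameters (%s) avoid SQL injection and help plan caching
cur.execute(
    "SELECT region, SUM(amount) FROM orders WHERE order_date >= %s GROUP BY region",
    ("2024-01-01",),
)
for region, total in cur.fetchall():
    print(region, total)

cur.close()
conn.close()
```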
Posted 3 weeks ago
3.0 - 8.0 years
5 - 9 Lacs
gurugram
Work from Office
Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: Apache Spark. Good-to-have skills: NA. Minimum 3 year(s) of experience is required. Educational Qualification: 15 years of full-time education.

Summary: As an Application Developer, you will be responsible for designing, building, and configuring applications to meet business process and application requirements. You will play a crucial role in developing innovative solutions to enhance business operations and efficiency.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation/contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Develop and implement software solutions to meet business requirements.
- Collaborate with team members to design and optimize applications.
- Troubleshoot and debug applications to ensure optimal performance.
- Stay updated on industry trends and technologies to enhance application development processes.
- Provide technical guidance and support to junior team members.

Professional & Technical Skills (must-have):
- Proficiency in Apache Spark.
- Strong understanding of big data processing and analytics.
- Experience with distributed computing frameworks.
- Hands-on experience in developing scalable applications.
- Knowledge of cloud computing platforms.

Must-have additional skills: PySpark, Spark SQL/SQL, AWS.

Additional Information: The candidate should have a minimum of 3 years of experience in Apache Spark. This position is based at our Gurugram office. A 15-year full-time education is required.
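Since the must-have additional skills pair PySpark with Spark SQL, here is a minimal sketch of running SQL over a DataFrame registered as a temporary view. The S3 path, table, and columns are illustrative.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("spark-sql-demo").getOrCreate()

# Hypothetical Parquet dataset on S3
events = spark.read.parquet("s3://my-bucket/events/")
events.createOrReplaceTempView("events")

# Plain SQL against the registered view
top_pages = spark.sql("""
    SELECT page, COUNT(*) AS hits
    FROM events
    WHERE event_type = 'view'
    GROUP BY page
    ORDER BY hits DESC
    LIMIT 10
""")
top_pages.show()
```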
Posted 3 weeks ago
0.0 - 1.0 years
0 Lacs
noida
Work from Office
Role & responsibilities: Back Office Associate, Financial Services. Graduation preferred. Basic knowledge of Python & SQL would be preferred. Please note: candidates would be hired under NAPS (National Apprenticeship Promotion Scheme) or NATS (National Apprenticeship Training Scheme).
Posted 3 weeks ago
2.0 - 5.0 years
7 - 11 Lacs
gurugram
Work from Office
Overview: Data is at the heart of our global financial network. In fact, the ability to consume, store, analyze, and gain insight from data has become a key component of our competitive advantage. Our goal is to build and maintain a leading-edge data platform that provides highly available, consistent data of the highest quality for all users of the platform, including our customers, operations teams, and data scientists. We focus on evolving our platform to deliver exponential scale to NCR Atleos, powering our future growth.

Data & AI Engineers at NCR Atleos experience working at one of the largest and most recognized financial companies in the world, while being part of a software development team responsible for next-generation technologies and solutions. Our engineers design and build large-scale data storage, computation, and distribution systems. They partner with data and AI experts to deliver high-quality AI solutions and derived data to our consumers. We are looking for Data & AI Engineers who like to innovate and seek out complex problems. We recognize that strength comes from diversity and will embrace your unique skills, curiosity, drive, and passion while giving you the opportunity to grow technically and as an individual. Engineers looking to work in the areas of orchestration, data modelling, data pipelines, APIs, storage, distribution, distributed computation, consumption, and infrastructure are ideal candidates.

Responsibilities: As a Data Engineer, you will join a Data & AI team transforming our global financial network and improving the quality of the products and services we provide to our customers. You will be responsible for designing, implementing, and maintaining data pipelines and systems to support the organization's data needs. Your role will involve collaborating with data scientists, analysts, and other stakeholders to ensure data accuracy, reliability, and accessibility.

Key Responsibilities:
- Data Pipeline Development: Design, build, and maintain scalable and efficient data pipelines to collect, process, and store structured and unstructured data from various sources.
- Data Integration: Integrate data from multiple sources such as databases, APIs, flat files, and streaming platforms into centralized data repositories.
- Data Modeling: Develop and optimize data models and schemas to support analytical and operational requirements. Implement data transformation and aggregation processes as needed.
- Data Quality Assurance: Implement data validation and quality assurance processes to ensure the accuracy, completeness, and consistency of data throughout its lifecycle.
- Performance Optimization: Monitor and optimize data processing and storage systems for performance, reliability, and cost-effectiveness. Identify and resolve bottlenecks and inefficiencies in data pipelines, and leverage automation and AI to improve overall operations.
- Infrastructure Management: Manage and configure cloud-based or on-premises infrastructure components such as databases, data warehouses, compute clusters, and data processing frameworks.
- Collaboration: Collaborate with cross-functional teams including data scientists, analysts, software engineers, and business stakeholders to understand data requirements and deliver solutions that meet business objectives.
- Documentation and Best Practices: Document data pipelines, systems architecture, and best practices for data engineering. Share knowledge and provide guidance to colleagues on data engineering principles and techniques.
- Continuous Improvement: Stay updated with the latest technologies, tools, and trends in data engineering, and recommend improvements to existing processes and systems.

Qualifications and Skills:
- Bachelor's degree or higher in Computer Science, Engineering, or a related field.
- Proven experience in data engineering or related roles, with a strong understanding of data processing concepts and technologies.
- Mastery of programming languages such as Python, Java, or Scala.
- Knowledge of database systems such as SQL, NoSQL, and data warehousing solutions.
- Knowledge of stream processing technologies such as Kafka or Apache Beam.
- Experience with distributed computing frameworks such as Apache Spark, Hadoop, or Apache Flink.
- Experience deploying pipelines on cloud platforms such as AWS, Azure, or Google Cloud Platform.
- Experience implementing enterprise systems in production settings for AI and natural language processing. Exposure to self-supervised learning, transfer learning, and reinforcement learning is a plus.
- Full-stack experience to build best-fit solutions leveraging Large Language Models (LLMs) and Generative AI, with a focus on privacy, security, and fairness.
- Good engineering skills to design AI output with nodes and nested nodes in JSON or array/HTML formats for as-is consumption and display on dashboards/portals.
- Strong problem-solving skills and attention to detail.
- Excellent communication and teamwork abilities.
- Experience with containerization and orchestration tools such as Docker and Kubernetes.
- Familiarity with data visualization tools such as Tableau or Power BI.
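A minimal sketch of the stream-processing skill named above, consuming JSON events from Kafka with the kafka-python package. The topic, broker address, and the flagging threshold are placeholders.

```python
import json
from kafka import KafkaConsumer  # pip install kafka-python

# Hypothetical topic and broker
consumer = KafkaConsumer(
    "atm-transactions",
    bootstrap_servers=["localhost:9092"],
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # Illustrative rule: surface unusually large transactions for review
    if event.get("amount", 0) > 100_000:
        print("flag for review:", event)
```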
Posted 3 weeks ago
3.0 - 5.0 years
7 - 11 Lacs
gurugram
Work from Office
3-5 years of industry experience in developing data science models and solutions. Able to quickly pick up new programming languages, technologies, and frameworks. Strong understanding of data structures and algorithms. Ability to work in a start-up environment with a do-it-yourself attitude.

Skills and Expertise:
- Expert-level proficiency in Python/SQL.
- Working knowledge of relational SQL and NoSQL databases, including Postgres and Redshift.
- Exposure to open-source tools and to cloud platforms like AWS and Azure, and the ability to use their tools (Athena, SageMaker, machine learning libraries), is an added advantage.
- Exposure to big data processing technologies like PySpark and Spark SQL is an added advantage.
- Exposure to AI tools and LLMs (Llama, ChatGPT, Bard) and prompt engineering is an added advantage.
- Exposure to visualization tools like Tableau and Power BI is an added advantage.

Primary Responsibility:
- As a senior analyst, the individual will provide technical expertise to the team.
- Should be expert in all phases of model development (EDA, hypothesis, feature creation, dimension reduction, data set clean-up, training models, model selection, validation, and deployment).
- Expected to participate in and lead discussions during the solution design phase.
- Should have a deep understanding of statistical and machine learning methods: classification (logistic regression, SVM, decision tree, random forest, neural network), regression (linear regression, decision tree, random forest, neural network), and classical optimization (gradient descent, etc.).
- Must have thorough mathematical knowledge of correlation/causation, classification, recommenders, probability, stochastic processes, and NLP, and how to apply them to a business problem.
- Expected to gain business understanding in the health care domain in order to come up with relevant analytics use cases (e.g., HEOR/RWE/survival modelling).
- Familiarity with NLP, sentiment analysis, text mining, and data scraping solutions.

Don't meet every job requirement? That's okay! Our company is dedicated to building a diverse, inclusive, and authentic workplace. If you're excited about this role but your experience doesn't perfectly fit every qualification, we encourage you to apply anyway. You may be just the right person for this role or others.
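A minimal sketch of the model selection and validation phases named above, comparing two of the listed classifiers with scikit-learn cross-validation. The data is synthetic and the models untuned; it illustrates the workflow, not a production setup.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for a prepared feature matrix
X, y = make_classification(n_samples=2000, n_features=20, random_state=42)

candidates = [
    ("logistic_regression", LogisticRegression(max_iter=1000)),
    ("random_forest", RandomForestClassifier(n_estimators=200, random_state=42)),
]
for name, model in candidates:
    # 5-fold cross-validated AUC as the selection metric
    scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: AUC {scores.mean():.3f} +/- {scores.std():.3f}")
```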
Posted 3 weeks ago
5.0 - 9.0 years
20 - 25 Lacs
gurugram
Work from Office
- Experience in Databricks for building ETL pipelines.
- Experience in building data models and optimizing queries.
- Experience with data management and reporting processes.
- Experience with medallion architecture and data governance.
- Build, test, and maintain scalable ETL pipelines using Databricks and Apache Spark.
- Contribute to the development of data models and optimize queries for performance and efficiency.
- Participate in the design and implementation of data governance practices, ensuring data quality and consistency across pipelines.
- Assist in the implementation and maintenance of the medallion architecture (bronze, silver, gold layers) to streamline data processing and reporting.
- Collaborate with cross-functional teams to integrate data management and reporting processes into broader business initiatives.
- Troubleshoot and optimize Databricks workflows to enhance performance and reduce costs.
- Maintain clear documentation of ETL pipelines, data models, and governance procedures to ensure transparency and scalability.
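A minimal sketch of a bronze-to-silver step in the medallion architecture mentioned above, assuming a Databricks/Delta Lake environment. Paths, columns, and cleaning rules are illustrative.

```python
from pyspark.sql import SparkSession, functions as F

# On Databricks a `spark` session already exists; this also works standalone with Delta configured
spark = SparkSession.builder.getOrCreate()

# Bronze: raw, append-only landing of source data (path is a placeholder)
bronze = spark.read.json("/mnt/raw/orders/")
bronze.write.format("delta").mode("append").save("/mnt/bronze/orders")

# Silver: deduplicated, typed, validated records ready for reporting
silver = (
    spark.read.format("delta").load("/mnt/bronze/orders")
    .dropDuplicates(["order_id"])
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .filter(F.col("amount") > 0)          # illustrative quality rule
)
silver.write.format("delta").mode("overwrite").save("/mnt/silver/orders")
```

A gold layer would then aggregate silver tables into business-facing marts, per the bronze/silver/gold split described in the listing.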
Posted 3 weeks ago
1.0 - 5.0 years
8 - 9 Lacs
bengaluru
Work from Office
- SAP GTS Expertise: In-depth knowledge and hands-on experience in configuring and managing SAP GTS modules for Compliance Management, Customs Management, Preference Management, and Intrastat Reporting.
- SAP GTS Edition for HANA: Strong technical expertise in the GTS edition for HANA, with a deep understanding of its features, performance enhancements, and integration capabilities.
- Customs & Trade Compliance Knowledge: Solid understanding of global trade regulations, customs requirements, export controls, and trade agreements (such as FTAs).
- Technical Skills: Experience with SAP HANA for performance optimization, reporting, and real-time data processing in the context of global trade services.
- System Integration: Strong experience in integrating SAP GTS with other SAP modules (e.g., SAP MM, SD, FICO) and external systems for seamless global trade operations.
- Problem-Solving & Analytical Skills: Ability to analyze complex business requirements, identify solutions, and implement system configurations to meet these needs.
- Project Management: Experience leading or supporting SAP GTS implementation projects, with a focus on cross-functional collaboration and efficient delivery.
- Communication Skills: Strong written and verbal communication skills to work effectively with both technical teams and business stakeholders.

Preferred Qualifications:
- SAP Certification in Global Trade Services (GTS) or SAP HANA is highly desirable.
- Experience with cloud-based SAP solutions and integration with SAP GTS is an advantage.

Location and way of working: Base location is Bangalore; this profile involves frequent travelling to client locations; work from office.
Posted 3 weeks ago
5.0 - 9.0 years
5 - 9 Lacs
bengaluru
Work from Office
- Data Collection: Gathering data from various sources: ClickSense, Oracle/SAP, SFDC, Veeva, Power BI, Accolade, ACMS, Praos/PriceFX, Mekko Graphics.
- Data Cleaning: Ensuring the data is accurate and free from errors.
- Price Correction: Regularly reviewing and correcting any pricing errors in the system to ensure accuracy, and checking that prices are consistent.
- Data Analysis: Using statistical tools and techniques to interpret data sets; preparing data for Product Management to enable management conclusions; sourcing and categorizing customer data and insights.
- Reporting: Creating visualizations and reports to present findings: DBS formats (DBS bowler, PSP A3 sheets), portfolio reports, funnel reports, revenue reports, margin reports, and market segmentation reports.
- Insights and Recommendations: Providing actionable insights to help guide business decisions.
- Data Processing: Part number request work, order quantity setting, and support.

Who you are:
- Bachelor's degree in statistics, mathematics, computer science, information technology, or a related field.
- Data Visualization Tools: Experience with tools like Tableau, Power BI, or similar platforms (ClickSense, Oracle/SAP, SFDC, Veeva, Accolade, ACMS, Praos/PriceFX, Mekko Graphics).
- Statistical Analysis: Understanding of statistical methods and tools.
- Database Management: Knowledge of database systems and data warehousing.
- Relevant Work Experience: Prior experience in data analysis, business intelligence, or a related field; experience managing data projects and working with cross-functional teams.

It would be a plus if you also possess professional certifications such as Certified Analytics Professional (CAP), Microsoft Certified: Data Analyst Associate, or similar.
Posted 3 weeks ago
2.0 - 5.0 years
16 - 18 Lacs
mumbai
Work from Office
Key Responsibilities:
- Understand business requirements by engaging with business teams.
- Extract data from valuable data sources and automate the data collection process.
- Process and clean data, and validate the integrity of data to be used for analysis.
- Perform exploratory data analysis to identify trends and patterns in large amounts of data.
- Build machine learning models using algorithms and statistical techniques like regression, decision trees, and boosting.
- Present insights using data visualization techniques.
- Propose solutions and strategies for complex business challenges.
- Build GenAI models using RAG frameworks for chatbots, summarisation, etc.
- Develop model deployment pipelines using Lambda, ECS, etc.

Skills & Attributes:
- Knowledge of statistical programming languages like R and Python, database query languages like SQL, and statistical tests like distributions and regression.
- Experience in data visualization tools like Tableau and Qlik Sense.
- Ability to write comprehensive reports, with an analytical mind and an inclination for problem-solving.
- Exposure to advanced techniques like GenAI, neural networks, NLP, and image and speech processing.
- Ability to engage with stakeholders to understand business requirements and convert them into technical problems for solution development and deployment.
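A minimal sketch of just the retrieval step of the RAG pattern mentioned above, using TF-IDF similarity in place of a vector database; the documents are stand-ins and the final LLM call is elided. Production RAG systems typically use learned embeddings rather than TF-IDF, but the retrieve-then-prompt flow is the same.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Stand-in knowledge base of policy snippets
documents = [
    "Premium is waived after three years of continuous coverage.",
    "Claims must be filed within 90 days of the incident.",
    "Policy renewals are processed automatically each January.",
]
vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(documents)

query = "When do I need to submit a claim?"
scores = cosine_similarity(vectorizer.transform([query]), doc_matrix)[0]
context = documents[scores.argmax()]   # best-matching snippet

# The retrieved context is grounded into the prompt; the LLM call itself is elided
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)
```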
Posted 3 weeks ago
0.0 - 1.0 years
8 - 10 Lacs
hyderabad
Work from Office
- Google Cloud Platform: GCS, DataProc, BigQuery, Dataflow.
- Programming Languages: Java; scripting languages like Python, Shell Script, SQL.
- 5+ years of experience in IT application delivery, with proven experience in agile development methodologies.
- 1 to 2 years of experience in Google Cloud Platform (GCS, DataProc, BigQuery, Composer, and data processing with Dataflow).
Posted 3 weeks ago
4.0 - 6.0 years
10 - 14 Lacs
bengaluru
Work from Office
- Google Cloud Platform: GCS, DataProc, BigQuery, Dataflow.
- Programming Languages: Java; scripting languages like Python, Shell Script, SQL.
- 5+ years of experience in IT application delivery, with proven experience in agile development methodologies.
- 1 to 2 years of experience in Google Cloud Platform (GCS, DataProc, BigQuery, Composer, and data processing with Dataflow).
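A minimal sketch of querying BigQuery from Python with the google-cloud-bigquery client. The project, dataset, and table names are placeholders, and credentials are assumed to be available in the environment.

```python
from google.cloud import bigquery  # pip install google-cloud-bigquery

client = bigquery.Client()  # picks up credentials from the environment

# Fully qualified table name is a placeholder
query = """
    SELECT station_id, AVG(temperature) AS avg_temp
    FROM `my-project.weather.readings`
    WHERE reading_date >= '2024-01-01'
    GROUP BY station_id
    ORDER BY avg_temp DESC
    LIMIT 5
"""
for row in client.query(query).result():   # blocks until the job finishes
    print(row.station_id, row.avg_temp)
```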
Posted 3 weeks ago
5.0 - 7.0 years
11 - 15 Lacs
coimbatore
Work from Office
About the job:
NP: Immediate to 15 days. Rounds: 3 rounds (virtual). Mandate skills: Apache Spark, Hive, Hadoop, Scala, Databricks.

The Role:
- Designing and building optimized data pipelines using cutting-edge technologies in a cloud environment to drive analytical insights.
- Constructing infrastructure for efficient ETL processes from various sources and storage systems.
- Leading the implementation of algorithms and prototypes to transform raw data into useful information.
- Architecting, designing, and maintaining database pipeline architectures, ensuring readiness for AI/ML transformations.
- Creating innovative data validation methods and data analysis tools.
- Ensuring compliance with data governance and security policies.
- Interpreting data trends and patterns to establish operational alerts.
- Developing analytical tools, programs, and reporting mechanisms.
- Conducting complex data analysis and presenting results effectively.
- Preparing data for prescriptive and predictive modeling.
- Continuously exploring opportunities to enhance data quality and reliability.
- Applying strong programming and problem-solving skills to develop scalable solutions.

Requirements:
- Experience with Big Data technologies (Hadoop, Spark, NiFi, Impala).
- 5+ years of hands-on experience designing, building, deploying, testing, maintaining, monitoring, and owning scalable, resilient, and distributed data pipelines.
- High proficiency in Scala/Java and Spark for applied large-scale data processing.
- Expertise with big data technologies, including Spark, Data Lake, and Hive.
Posted 3 weeks ago
5.0 - 10.0 years
30 - 45 Lacs
navi mumbai, gurugram
Hybrid
Position: Senior Quantitative Engineer - Systematic Strategies. Location: Navi Mumbai or Delhi. Shift: UK.

The Role: As a Senior Quantitative Engineer, you will design, implement, and maintain the data and systems architecture that powers Morningstar's Systematic Strategies portfolios. You will work closely with portfolio managers, researchers, and developers to help build, scale, and optimize the infrastructure that powers quantitative research and investment management. You should understand the nuances of the data and prepare it for ingestion, and your daily work with researchers and portfolio managers will facilitate the research, design, and deployment of quantitative strategies. This role is ideal for someone who thrives at the intersection of data engineering, cloud architecture, and financial systems. Data and research are the key pillars of the role, which requires strong technical skills and comfort with cloud technology.

Responsibilities:
- Design and maintain scalable data pipelines for financial and alternative datasets using PySpark and AWS.
- Architect and manage cloud infrastructure (AWS EMR, S3, Glue, Lambda, ECS) to support quant research and production.
- Collaborate with quantitative researchers and portfolio managers to operationalize data workflows and support model deployment.
- Develop and support interactive dashboards and internal tools to visualize pipeline health, system performance, and data quality metrics for quant teams.
- Build and maintain tools and platforms for backtesting, simulation, and analytics.
- Optimize systems for performance, reliability, and cost-efficiency across compute and storage resources.
- Implement data governance, quality, and lineage processes for high-integrity research environments.
- Automate operational workflows and ensure CI/CD for quant systems.

Requirements:
- Bachelor's or Master's degree in a quantitative or financial discipline, or in engineering.
- 5+ years of experience in a Quant Engineer / Data Engineer / Platform Engineer role on an investment data handling team.
- Expertise in Python, with strong knowledge of PySpark and distributed data processing.
- Hands-on experience with AWS services (EMR, S3, Glue, Lambda, ECS, CloudWatch).
- Familiarity with market data feeds and financial datasets like FactSet, Bloomberg, Compustat.
- Strong understanding of data architecture, storage formats (Parquet, Delta), and Spark performance tuning.
- Proficiency in developing interactive dashboards using tools like Tableau/Power BI/QuickSight to monitor portfolio performance.
- Experience with orchestration tools like Step Functions/Airflow and containerization using Docker.
- Comfortable working in Agile teams with Git, CI/CD, and infrastructure-as-code (Terraform, CloudFormation).
- Knowledge of SQL and distributed query engines (e.g., Athena).
- Exposure to Axioma or equivalent portfolio risk and optimization platforms (e.g., MSCI Barra, Bloomberg PORT, PyPortfolioOpt) to support risk modeling, portfolio construction, and performance attribution workflows.
- Knowledge in the domains of Agile methodology, machine learning, and optimization is a plus.

Good-to-have skills:
- Strong understanding of financial reports such as returns, factor attribution, and risk metrics.
- Exposure to financial markets or quant research environments.
- Familiarity with cost-optimization strategies in cloud environments.
- Experience with real-time data ingestion frameworks (e.g., Kafka, Kinesis).

Morningstar is an equal opportunity employer.
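A minimal sketch of the pipeline-health metrics a dashboard for this role might surface, reading a Parquet dataset from S3 with pandas. The bucket path and column names are placeholders, and reading s3:// URLs requires the s3fs package plus AWS credentials.

```python
import pandas as pd

# Placeholder path; pd.read_parquet on s3:// needs s3fs installed
returns = pd.read_parquet("s3://quant-research/returns/daily.parquet")

# Quick sanity metrics of the kind a pipeline-health dashboard would show
print("rows:", len(returns))
print("null fraction per column:\n", returns.isna().mean())
print("date range:", returns["date"].min(), "to", returns["date"].max())
```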
Posted 3 weeks ago
2.0 - 7.0 years
3 - 6 Lacs
mohali
Work from Office
You will be part of a 20-member team. You are required to build and sustain a strong and reliable relationship with clients through proactive communication and close coordination with other teams.

Responsibilities and Duties:
- Tele-calling and live data entry of container in/out movements in the software.
- Insert container/customer data by inputting text-based and numerical information from source documents within time limits.
- Keep various software systems updated.
- Compile, verify the accuracy of, and sort information according to priorities to prepare source data for computer entry.
- Review data for deficiencies or errors, correct any incompatibilities where possible, and check output.
- Maintain accurate records of all export-related activities, including documentation, shipments, and bill payments.
- Prepare and process export documents such as invoices, bills of lading, certificates of origin, and shipping instructions.

Qualifications and Skills:
- Graduates and above with a minimum of 2 years of experience may apply.
- Fluency in English, with no mother-tongue influence.
- Proficient with MS Excel and Outlook.
- Typing speed of at least 30 words per minute.
- Age between 24 and 40 years.
- Comfortable working in night and rotational shifts.
Posted 3 weeks ago
0.0 years
3 - 4 Lacs
gurugram
Work from Office
We are looking for a motivated and enthusiastic Trainee Data Engineer to join our Engineering team. This is an excellent opportunity for recent graduates to start their career in data engineering, work with modern technologies, and learn from experienced professionals. The candidate should be eager to learn, curious about data, and willing to contribute to building scalable and reliable data systems. Responsibilities: Understand and align with the values and vision of the organization. Adhere to all company policies and procedures. Support in developing and maintaining data pipelines under supervision. Assist in handling data ingestion, processing, and storage tasks. Learn and contribute to database management and basic data modeling. Collaborate with team members to understand project requirements. Document assigned tasks, processes, and workflows. Stay proactive in learning new tools, technologies, and best practices in data engineering. Required Candidate profile: Bachelor's degree in Computer Science, Information Technology, or related field. Fresh graduates or candidates with up to 1 year of experience are eligible. Apply Link - https://leewayhertz.zohorecruit.in/jobs/Careers/32567000019403095/Trainee-Data-Engineer?source=CareerSite LeewayHertz is an equal opportunity employer and does not discriminate based on race, color, religion, sex, age, disability, national origin, sexual orientation, gender identity, or any other protected status. We encourage a diverse range of applicants.
Posted 3 weeks ago
0.0 - 2.0 years
1 - 2 Lacs
mumbai, mumbai suburban
Work from Office
Scanzer Outsourcing is looking for Data Entry Operators and Computer Operators to join our dynamic team and embark on a rewarding career journey. You will input and update data in computer systems.
Posted 3 weeks ago
7.0 - 12.0 years
8 - 13 Lacs
bengaluru
Work from Office
Your future role: Take on a new challenge and apply your data engineering expertise in a cutting-edge field. You'll work alongside collaborative and innovative teammates. You'll play a key role in enabling data-driven decision-making across the organization by ensuring data availability, quality, and accessibility. Day-to-day, you'll work closely with teams across the business (e.g., Data Scientists, Analysts, and ML Engineers), mentor junior engineers, and contribute to the architecture and design of our data platforms and solutions. You'll specifically take care of designing and developing scalable data pipelines, as well as managing and optimizing object storage systems.

We'll look to you for:
- Designing, developing, and maintaining scalable and efficient data pipelines using tools like Apache NiFi and Apache Airflow.
- Creating robust Python scripts for data ingestion, transformation, and validation.
- Managing and optimizing object storage systems such as Amazon S3, Azure Blob, or Google Cloud Storage.
- Collaborating with Data Scientists and Analysts to understand data requirements and deliver production-ready datasets.
- Implementing data quality checks, monitoring, and alerting mechanisms.
- Ensuring data security, governance, and compliance with industry standards.
- Mentoring junior engineers and promoting best practices in data engineering.

All about you: We value passion and attitude over experience. That's why we don't expect you to have every single skill. Instead, we've listed some that we think will help you succeed and grow in this role:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 7+ years of experience in data engineering or a similar role.
- Strong proficiency in Python and data processing libraries (e.g., Pandas, PySpark).
- Hands-on experience with Apache NiFi for data flow automation.
- Deep understanding of object storage systems and cloud data architectures.
- Proficiency in SQL and experience with both relational and NoSQL databases.
- Familiarity with cloud platforms (AWS, Azure, or GCP).
- Exposure to the Data Science ecosystem, including tools like Jupyter, scikit-learn, TensorFlow, or MLflow.
- Experience working in cross-functional teams with Data Scientists and ML Engineers.
- Cloud certifications or relevant technical certifications are a plus.
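A minimal sketch of the data-quality-check responsibility listed above, as a generic pandas helper. The key columns and the sample frame are illustrative; real checks and thresholds depend on the dataset.

```python
import pandas as pd

def quality_report(df: pd.DataFrame, key_columns: list[str]) -> dict:
    """Generic checks a pipeline could run before publishing a dataset."""
    return {
        "row_count": len(df),
        "duplicate_keys": int(df.duplicated(subset=key_columns).sum()),
        "null_fraction": df.isna().mean().round(4).to_dict(),
    }

# Illustrative usage with a toy frame
df = pd.DataFrame({"id": [1, 2, 2], "value": [10.0, None, 7.5]})
print(quality_report(df, key_columns=["id"]))
```

A pipeline would compare these numbers against thresholds and raise an alert on breach, matching the monitoring and alerting duties above.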
Posted 3 weeks ago