2.0 - 5.0 years
14 - 17 Lacs
Mumbai
Work from Office
As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, providing regular support/guidance to project teams on complex coding, issue resolution, and execution. Your primary responsibilities include: Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements. Strive for continuous improvement by testing the built solution and working under an agile framework. Discover and implement the latest technology trends to maximize and build creative solutions. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Experience with Apache Spark (PySpark): in-depth knowledge of Spark's architecture, core APIs, and PySpark for distributed data processing. Big Data Technologies: familiarity with Hadoop, HDFS, Kafka, and other big data tools. Data Engineering Skills: strong understanding of ETL pipelines, data modeling, and data warehousing concepts. Strong proficiency in Python: expertise in Python programming with a focus on data processing and manipulation. Data Processing Frameworks: knowledge of data processing libraries such as Pandas and NumPy. SQL Proficiency: experience writing optimized SQL queries for large-scale data analysis and transformation. Cloud Platforms: experience working with cloud platforms like AWS, Azure, or GCP, including using cloud storage systems. Preferred technical and professional experience: Define, drive, and implement an architecture strategy and standards for end-to-end monitoring. Partner with the rest of the technology teams, including application development, enterprise architecture, testing services, and network engineering. Good to have: detection and prevention tools for Company products and Platform and customer-facing
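For orientation, here is a minimal sketch of the kind of distributed PySpark processing this listing asks for; the bucket paths, column names, and aggregation are illustrative assumptions, not taken from the posting.

```python
# Minimal PySpark ETL sketch (paths and column names are hypothetical).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-etl").getOrCreate()

# Read raw JSON events from distributed storage.
events = spark.read.json("s3a://example-bucket/raw/events/")

# Typical transformation: filter, derive a date column, aggregate per key.
daily = (events
         .filter(F.col("event_type") == "purchase")
         .withColumn("event_date", F.to_date("event_ts"))
         .groupBy("event_date", "country")
         .agg(F.count("*").alias("purchases"),
              F.sum("amount").alias("revenue")))

# Write partitioned Parquet for downstream warehouse loads.
(daily.write.mode("overwrite")
      .partitionBy("event_date")
      .parquet("s3a://example-bucket/curated/daily_purchases/"))
```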
Posted 4 days ago
2.0 - 5.0 years
14 - 17 Lacs
Hyderabad
Work from Office
As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, providing regular support/guidance to project teams on complex coding, issue resolution, and execution. Your primary responsibilities include: Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements. Strive for continuous improvement by testing the built solution and working under an agile framework. Discover and implement the latest technology trends to maximize and build creative solutions. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Experience with Apache Spark (PySpark): in-depth knowledge of Spark's architecture, core APIs, and PySpark for distributed data processing. Big Data Technologies: familiarity with Hadoop, HDFS, Kafka, and other big data tools. Data Engineering Skills: strong understanding of ETL pipelines, data modeling, and data warehousing concepts. Strong proficiency in Python: expertise in Python programming with a focus on data processing and manipulation. Data Processing Frameworks: knowledge of data processing libraries such as Pandas and NumPy. SQL Proficiency: experience writing optimized SQL queries for large-scale data analysis and transformation. Cloud Platforms: experience working with cloud platforms like AWS, Azure, or GCP, including using cloud storage systems. Preferred technical and professional experience: Define, drive, and implement an architecture strategy and standards for end-to-end monitoring. Partner with the rest of the technology teams, including application development, enterprise architecture, testing services, and network engineering. Good to have: detection and prevention tools for Company products and Platform and customer-facing
Posted 4 days ago
5.0 - 10.0 years
12 - 17 Lacs
Noida
Work from Office
Spark/PySpark: technical, hands-on data processing. Table design knowledge using Hive, similar to RDBMS knowledge. Database SQL knowledge for retrieval of data: transformation queries such as joins (full, left, right), ranking, group by. Good communication skills. Additional skills: GitHub, Jenkins, and shell scripting would be an added advantage. Mandatory Competencies: Big Data - PySpark, Big Data - Spark, Big Data - Hadoop, Big Data - Hive, Database - SQL, DevOps - GitHub, DevOps - Jenkins, DevOps - Shell Scripting, Beh - Communication and collaboration. At Iris Software, we offer world-class benefits designed to support the financial, health and well-being needs of our associates to help achieve harmony between their professional and personal growth. From comprehensive health insurance and competitive salaries to flexible work arrangements and ongoing learning opportunities, we're committed to providing a supportive and rewarding work environment. Join us and experience the difference of working at a company that values its employees' success and happiness.
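As a hedged illustration of the join, ranking, and group-by patterns this listing names, the sketch below runs Spark SQL against two hypothetical Hive tables, orders and customers (all table and column names are assumptions):

```python
# Joins + window ranking + group by over hypothetical Hive tables.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("sql-patterns")
         .enableHiveSupport()
         .getOrCreate())

# Left join, per-customer aggregation, then a rank within each region.
ranked = spark.sql("""
    SELECT c.region,
           o.customer_id,
           SUM(o.amount) AS total_spend,
           RANK() OVER (PARTITION BY c.region
                        ORDER BY SUM(o.amount) DESC) AS spend_rank
    FROM orders o
    LEFT JOIN customers c
      ON o.customer_id = c.customer_id
    GROUP BY c.region, o.customer_id
""")
ranked.show()
```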
Posted 4 days ago
3.0 - 8.0 years
4 - 9 Lacs
Mumbai Suburban
Work from Office
Job Title: Data Processing (DP) Executive. Location: MIDC, Andheri East, Mumbai. Work Mode: Work From Office (WFO). Work Days: Monday to Friday. Work Hours: 9:00 PM to 6:00 AM IST (Night Shift). Job Summary: We are seeking a highly skilled and detail-oriented Data Processing (DP) Executive to join our team. The ideal candidate will have a solid background in data analysis and processing, strong proficiency in industry-standard tools, and the ability to manage large data sets efficiently. This role is critical in ensuring data integrity and delivering accurate insights for business decision-making. Key Responsibilities: Manage and process data using tools like SPSS and Q programming. Perform data cleaning, transformation, and statistical analysis. Collaborate with research and analytics teams to interpret and format data for reporting. Create reports and dashboards; experience with Tableau or similar visualization tools is an advantage. Utilize SQL for data querying and validation. Ensure accuracy and consistency of data deliverables across projects. Handle multiple projects simultaneously with a keen eye for detail and timelines. Technical Skills: Proficiency in SPSS and Q programming. Strong understanding of data processing techniques and statistical methods. Familiarity with Tableau or other data visualization tools (preferred). Basic working knowledge of SQL. Educational Qualifications: Bachelor's degree in Statistics, Computer Science, Data Science, or a related field. Experience: Minimum 3 years of experience in data processing or a similar analytical role. Soft Skills: Excellent analytical and problem-solving abilities. Strong attention to detail and accuracy. Good communication skills and the ability to work in a team-oriented environment. Self-motivated with the ability to work independently and manage multiple tasks effectively.
Posted 4 days ago
2.0 - 3.0 years
2 - 2 Lacs
Chennai
Work from Office
MVH requires a Data Entry Operator for handling large data files and research data: data extraction and data collection, data entry, and cleaning of data in REDCap and Excel sheets. Candidates can send their CV to hr@mvdiabetes.in or call 6381040749
Posted 4 days ago
2.0 - 4.0 years
2 - 6 Lacs
Gurugram
Work from Office
As a key member of the DTS team, you will primarily collaborate closely with a global leading hedge fund on data engagements, and partner with the data strategy and sourcing team on data requirements to design data pipelines and delivery structures. Desired Skills and Experience: Essential skills: B.Tech/M.Tech/MCA with 2-4 years of overall experience. Skilled in Python and SQL. Experience with data modeling, data warehousing, and building data pipelines. Experience working with FTP, API, S3, and other distribution channels to source data. Experience working with financial and/or alternative data products. Experience working with cloud-native tools for data processing and distribution. Experience with Snowflake and Airflow. Key Responsibilities: Engage with vendors and technical teams to systematically ingest, evaluate, and create valuable data assets. Collaborate with the core engineering team to create central capabilities to process, manage, and distribute data assets at scale. Apply robust data quality rules to systematically qualify data deliveries and guarantee the integrity of financial datasets. Engage with technical and non-technical clients as SME on data asset offerings. Key Metrics: Python, SQL, Snowflake, data engineering and pipelines. Behavioral Competencies: Good communication (verbal and written). Experience in managing client stakeholders
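Given the Snowflake-plus-Airflow stack this listing names, a minimal Airflow DAG sketch of such an ingest flow might look as follows; the DAG id, task bodies, and schedule are assumptions for illustration (Airflow 2.4+ syntax):

```python
# Hypothetical two-step ingest DAG: pull a vendor file from S3, then load
# it to Snowflake. Task bodies are stubs; names are illustrative only.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest_vendor_file(**context):
    # boto3 download plus schema/row-count validation would live here.
    ...

def load_to_snowflake(**context):
    # A COPY INTO via the Snowflake connector would live here.
    ...

with DAG(
    dag_id="vendor_data_ingest",        # illustrative name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                  # Airflow 2.4+ keyword
    catchup=False,
) as dag:
    ingest = PythonOperator(task_id="ingest", python_callable=ingest_vendor_file)
    load = PythonOperator(task_id="load", python_callable=load_to_snowflake)
    ingest >> load                      # run ingest before load
```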
Posted 4 days ago
4.0 - 6.0 years
4 - 8 Lacs
Gurugram
Work from Office
Supporting the client in Financial Planning and Analysis (FP&A) activities, including collecting revenue, headcount, and cost submissions. Support and actively participate in forecast and budgeting functions, data processing, and the review and build-up of revenue, headcount, and cost Excel spreadsheets. Prepare and manage different reporting activities related to relevant business areas and KPIs. Responsible for supporting the onshore team in preparing relevant projections on key areas and KPIs. Assist in the preparation of presentations to track and analyze the performance of key areas of the business; assist in improving existing templates, flagging and documenting any lags in information provided, and sharing suggestions. Perform variance analysis (actuals vs. estimates) to determine deviations from projected metrics and help identify areas for improvement. Support ad-hoc analysis and projects as per client requests. Contribute toward managing project timelines and the quality of deliverables in a manner that ensures high client satisfaction. Demonstrate strength and flair in client/requester relationship building and management, and information/knowledge needs assessment. Key Competencies: CA/MBA/CFA. 4+ years of experience in the FP&A domain. The candidate should have the ability to work as part of the team and independently as per the requirement. Excellent written and verbal communication skills. Good knowledge of accounting principles, budgeting, and forecasting. MS Office skills: should be good in MS PowerPoint, MS Excel, and MS Word.
Posted 4 days ago
0.0 - 3.0 years
2 - 3 Lacs
Noida
Work from Office
EXL IS HIRING FOR BACK-OFFICE (CONTRACTUAL ROLE) PROCESS. About EXL: EXL Service is a global analytics and digital solutions company serving industries including insurance, healthcare, banking and financial services, media, retail, and others. The company is headquartered in New York and has more than 37,000 professionals in locations throughout the United States, Europe, Asia, Latin America, Australia and South Africa. http://www.exlservice.com ELIGIBILITY: - Candidate should be a graduate (any stream). - Both freshers and experienced candidates can apply. - Candidate should be comfortable with night shifts. - Candidates should be comfortable with Work from Office (Sec-144, Noida). - Notice Period: immediate joiners preferred. - B.Tech graduates/Diploma graduates will not be entertained. Please note: it will be a contractual period of 6 months. PERKS AND BENEFITS: - Salary: freshers 2.50 LPA; experienced candidates to be offered 3.00 LPA (depending upon last drawn salary and experience). - 5 days working. - Both-side transport till further update (within the hiring grid). NOTE: Do not carry any electronic items like laptops or pen drives. MANDATORY DOCUMENTS: Please carry hard copies of your resume (2 copies), Aadhaar card, a photocopy of your PAN card, and 2 recent passport-size photographs. Entry will not be allowed into the premises without the above-mentioned documents. Please come between 11:00 AM and 1:30 PM, as entry is not allowed after 2:00 PM. Regards, EXL RECRUITMENT TEAM. EXL: Empowering Businesses Through Data & AI. EXL is a global leader in analytics, AI, and digital solutions for all industries. Let us power your growth with generative AI and digital transformation!
Posted 4 days ago
0.0 - 4.0 years
0 - 1 Lacs
Vadodara
Work from Office
Responsibilities: * Maintain confidentiality at all times * Input data accurately using computer software * Process documents with precision * Meet deadlines consistently * Manage back office tasks efficiently
Posted 4 days ago
0.0 - 1.0 years
3 - 6 Lacs
Hyderabad
Work from Office
Job Summary: As a Clinical Reporting Analyst you will be integral to our mission of providing accurate and timely analysis of ECG data, contributing to the improvement of patient care and outcomes. The team looks forward to your contributions and the impact you will make in enhancing our data processing capabilities. Join us in embracing the startup vibe of agility, open communication, and teamwork. Here, you'll thrive in an environment where learning, challenging the status quo, and unleashing your creativity are encouraged. Your voice matters, and together, we move swiftly, learn from missteps, and make meaningful impacts. Let's forge ahead, innovate, and make a difference. Come be a part of our dynamic team! Job Responsibilities: Every candidate goes through a 6-week training program covering ECG analysis, data processing techniques, and software training. Once the training completes, your primary duties will include: Sanitise and process up Beat data as per the standard process. Prepare up Beat data with appropriate highlights for further processing. Effectively communicate ECG abnormalities by notifying lead technicians and/or physicians and clinical staff as necessary. Maintain compliance with job-specific proficiency requirements. Your specific responsibilities may change from time to time at the discretion of the Company. You will also be expected to comply with all rules, policies, and procedures of the Company, as they may be adopted and modified from time to time. Candidate Requirements: 12th grade plus a Diploma in Cardiology, or a Bachelor's Degree in Zoology or life sciences. Experience as a Holter scanner or telemetry/monitor technician will be an added advantage. Proficiency in handling computers. Excellent attention to detail. Positive attitude and team player, with the ability to use critical-thinking skills. Knowledge of medical terminology specific to cardiology and electrophysiology. Excellent written and verbal communication skills. Strong analytical, communication, and interpersonal skills.
Posted 4 days ago
12.0 - 15.0 years
12 - 15 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
AWS experience (not Azure or GCP), with 12-15 years of experience and hands-on expertise in design and implementation. Design and develop data solutions: design and implement efficient data processing pipelines using AWS services like AWS Glue, AWS Lambda, Amazon S3, and Amazon Redshift. Candidates should possess exceptional communication skills to engage effectively with US clients. The ideal candidate must be hands-on with significant practical experience. Availability to work overlapping US hours is essential. The contract duration is 6 months. For this role, we're looking for candidates with 12 to 15 years of experience.
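As a rough, non-authoritative sketch of the Glue-based pipeline work this listing describes, the skeleton below reads a Data Catalog table, remaps fields, and writes Parquet to S3; the database, table, and bucket names are hypothetical.

```python
# Sketch of an AWS Glue PySpark job (catalog/bucket names are made up).
# Glue supplies the awsglue runtime and the JOB_NAME argument.
import sys
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog.
src = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="events")

# Rename/cast fields on the way through.
mapped = ApplyMapping.apply(
    frame=src,
    mappings=[("event_id", "string", "event_id", "string"),
              ("ts", "string", "event_ts", "timestamp")])

# Land curated Parquet in S3 (e.g., for Redshift COPY or Spectrum).
glue_context.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/curated/"},
    format="parquet")
job.commit()
```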
Posted 4 days ago
2.0 - 6.0 years
7 - 11 Lacs
Bengaluru
Work from Office
About The Role Job Title: Senior Data Engineer. As a Senior Data Engineer, you will play a key role in designing and implementing data solutions @Kotak811. You will be responsible for leading data engineering projects, mentoring junior team members, and collaborating with cross-functional teams to deliver high-quality and scalable data infrastructure. Your expertise in data architecture, performance optimization, and data integration will be instrumental in driving the success of our data initiatives. Responsibilities: 1. Data Architecture and Design: a. Design and develop scalable, high-performance data architecture and data models. b. Collaborate with data scientists, architects, and business stakeholders to understand data requirements and design optimal data solutions. c. Evaluate and select appropriate technologies, tools, and frameworks for data engineering projects. d. Define and enforce data engineering best practices, standards, and guidelines. 2. Data Pipeline Development & Maintenance: a. Develop and maintain robust and scalable data pipelines for data ingestion, transformation, and loading for real-time and batch use cases. b. Implement ETL processes to integrate data from various sources into data storage systems. c. Optimise data pipelines for performance, scalability, and reliability: i. Identify and resolve performance bottlenecks in data pipelines and analytical systems. ii. Monitor and analyse system performance metrics, identifying areas for improvement and implementing solutions. iii. Optimise database performance, including query tuning, indexing, and partitioning strategies. d. Implement real-time and batch data processing solutions. 3. Data Quality and Governance: a. Implement data quality frameworks and processes to ensure high data integrity and consistency. b. Design and enforce data management policies and standards. c. Develop and maintain documentation, data dictionaries, and metadata repositories. d. Conduct data profiling and analysis to identify data quality issues and implement remediation strategies. 4. ML Models Deployment & Management (a plus): a. Responsible for designing, developing, and maintaining the infrastructure and processes necessary for deploying and managing machine learning models in production environments. b. Implement model deployment strategies, including containerization and orchestration using tools like Docker and Kubernetes. c. Optimise model performance and latency for real-time inference in consumer applications. d. Collaborate with DevOps teams to implement continuous integration and continuous deployment (CI/CD) processes for model deployment. e. Monitor and troubleshoot deployed models, proactively identifying and resolving performance or data-related issues. f. Implement monitoring and logging solutions to track model performance, data drift, and system health. 5. Team Leadership and Mentorship: a. Lead data engineering projects, providing technical guidance and expertise to team members, and conduct code reviews to ensure adherence to coding standards and best practices. b. Mentor and coach junior data engineers, fostering their professional growth and development. c. Collaborate with cross-functional teams, including data scientists, software engineers, and business analysts, to drive successful project outcomes. d. Stay abreast of emerging technologies, trends, and best practices in data engineering, share knowledge within the team, and participate in the evaluation and selection of data engineering tools and technologies.
Qualifications: 1. 3-5 years' experience with a Bachelor's Degree in Computer Science, Engineering, Technology, or a related field required. 2. Good understanding of streaming technologies like Kafka and Spark Streaming. 3. Experience with Enterprise Business Intelligence Platform/Data platform sizing, tuning, optimization, and system landscape integration in large-scale, enterprise deployments. 4. Proficiency in at least one programming language, preferably Java, Scala, or Python. 5. Good knowledge of Agile and SDLC/CICD practices and tools. 6. Must have proven experience with Hadoop, MapReduce, Hive, Spark, and Scala programming, with in-depth knowledge of performance tuning/optimizing data processing jobs and debugging time-consuming jobs. 7. Proven experience in development of conceptual, logical, and physical data models for Hadoop, relational, EDW (enterprise data warehouse), and OLAP database solutions. 8. Good understanding of distributed systems. 9. Experience working extensively in multi-petabyte DW environments. 10. Experience in engineering large-scale systems in a product environment.
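Since the qualifications emphasize Kafka with Spark Streaming, here is a hedged Structured Streaming sketch; the broker address, topic, schema, and output paths are assumptions:

```python
# Consume a Kafka topic, parse JSON payloads, write micro-batches to Parquet.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StringType, DoubleType

spark = SparkSession.builder.appName("kafka-stream").getOrCreate()

# Illustrative payload schema.
schema = (StructType()
          .add("txn_id", StringType())
          .add("amount", DoubleType()))

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")  # placeholder
       .option("subscribe", "transactions")               # placeholder topic
       .load())

# Kafka delivers bytes; cast and unpack the JSON value column.
parsed = raw.select(
    F.from_json(F.col("value").cast("string"), schema).alias("t")
).select("t.*")

query = (parsed.writeStream
         .format("parquet")
         .option("path", "/data/txn/")                # illustrative sink
         .option("checkpointLocation", "/chk/txn/")   # required for recovery
         .start())
query.awaitTermination()
```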
Posted 5 days ago
5.0 - 8.0 years
12 - 22 Lacs
Pune, Maharashtra
Hybrid
Job Role: You'll be responsible for: Hands-on experience with data warehousing tools and methodologies. Design and manage scalable infrastructure on Google Cloud Platform (GCP) to support various application and data workloads. Implement and manage IAM policies, roles, and permissions to ensure secure access across GCP services. Build and optimize workflows using Cloud Composer (Airflow) and manage data processing pipelines via Dataproc. Provision and maintain Compute Engine VMs and integrate them into broader system architectures. Set up and query data in BigQuery, and manage data flows securely and efficiently. Develop and maintain CI/CD pipelines using Argo CD, Jenkins, or GitOps methodologies. Administer Kubernetes clusters (GKE), including node scaling, workload deployments, and Helm chart management. Create and maintain YAML files for defining infrastructure as code. Monitor system health and performance using tools like Prometheus, Grafana, and GCP's native monitoring stack. Troubleshoot infrastructure issues, perform root cause analysis, and implement preventative measures. Collaborate with development teams to integrate infrastructure best practices and support application delivery. Document infrastructure standards, deployment processes, and operational procedures. Participate in Agile ceremonies, contributing to sprint planning, daily stand-ups, and retrospectives
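To illustrate the BigQuery querying this role involves, here is a minimal sketch using the official google-cloud-bigquery client (assuming the library is installed and application default credentials are configured); the project, dataset, and table names are made up.

```python
# Minimal BigQuery query sketch; identifiers are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()  # picks up application default credentials

query = """
    SELECT status, COUNT(*) AS n
    FROM `example_project.pipeline_ops.job_runs`
    GROUP BY status
    ORDER BY n DESC
"""
for row in client.query(query).result():
    print(row.status, row.n)
```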
Posted 5 days ago
5.0 - 9.0 years
8 - 12 Lacs
Bengaluru
Work from Office
Educational Requirements: MCA, MSc, Bachelor of Engineering, BBA, BCom. Service Line: Data & Analytics Unit. Responsibilities: 1. 5-8 years' experience in Azure (hands-on experience in Azure Databricks and Azure Data Factory). 2. Good knowledge of SQL and PySpark. 3. Should have knowledge of the Medallion architecture pattern. 4. Knowledge of Integration Runtime. 5. Knowledge of different ways of scheduling jobs via ADF (event/schedule etc.). 6. Should have knowledge of AAS and Cubes. 7. To create, manage, and optimize cube processing. 8. Good communication skills. 9. Experience in leading a team. Additional Responsibilities: Good knowledge of software configuration management systems. Strong business acumen, strategy, and cross-industry thought leadership. Awareness of latest technologies and industry trends. Logical thinking and problem-solving skills along with an ability to collaborate. Two or three industry domain knowledge. Understanding of the financial processes for various types of projects and the various pricing models available. Client interfacing skills. Knowledge of SDLC and agile methodologies. Project and team management. Preferred Skills: Technology->Big Data - Data Processing->Spark
Posted 5 days ago
5.0 - 10.0 years
5 - 9 Lacs
Bengaluru
Work from Office
About the Role: We're seeking an experienced Infrastructure Engineer to join our platform team, handling massive-scale data processing and analytics infrastructure that supports over 5B+ events and more than 5M+ DAU. We're looking for someone who can help us scale gracefully while optimizing for performance, cost, and resiliency. Key Responsibilities: Design, implement, and manage our AWS infrastructure, with a strong emphasis on automation, resiliency, and cost-efficiency. Develop and oversee scalable data pipelines (for event processing, transformation, and delivery). Implement and manage stream processing frameworks (such as Kinesis, Kafka, or MSK). Handle orchestration and ETL workloads, employing services like AWS Glue, Athena, Databricks, Redshift, or Apache Airflow. Implement robust network, storage, and backup strategies for growing workloads. Monitor, debug, and resolve production issues related to data and infrastructure in real time. Implement IAM controls, logging, alerts, and security best practices across all components. Provide deployment automation (Docker, Terraform, CloudFormation) and collaborate with application engineers to enable smooth delivery. Build SOPs for support and set up a functioning 24x7 support system (including hiring the right engineers) to ensure system uptime and availability. Required Technical Skills: 5+ years of experience with AWS services (VPC, EC2, S3, Security Groups, RDS, Kinesis, MSK, Redshift, Glue). Experience designing and managing large-scale data pipelines with high-throughput workloads. Ability to handle 5 billion events/day and 1M+ concurrent-user workloads gracefully. Familiar with scripting (Python, Terraform) and automation practices (Infrastructure as Code). Familiar with network fundamentals, Linux, scaling strategies, and backup routines. Collaborative team player able to work with engineers, data analysts, and stakeholders. Preferred Tools & Technologies: AWS: EC2, S3, VPC, Security Groups, RDS, Redshift, DocumentDB, MSK, Glue, Athena, CloudWatch. Infrastructure as Code: Terraform, CloudFormation. Scripted automation: Python, Bash. Container orchestration: Docker, ECS or EKS. Workflow orchestration: Apache Airflow, Dagster. Streaming framework: Apache Kafka, Kinesis, Flink. Other: Linux, Git, security best practices (IAM, Security Groups, ACM). Education: Bachelor's/Master's degree in Computer Science, Data Science, or a related field. Relevant professional certifications in cloud platforms or data technologies. Why Join Us: Opportunity to work in a fast-growing audio and content platform. Exposure to multi-language marketing and global user base strategies. A collaborative work environment with a data-driven and innovative approach. Competitive salary and growth opportunities in marketing and growth strategy. Success Metrics: Scalability: ability to handle 1+ billion events/day with low latency and high resiliency. Cost-efficiency: reduction in AWS operational costs by optimizing services, storage, and data transfer. Uptime/SLI: achieve 99.9999% platform and pipeline uptimes with automated fallback mechanisms. Data delivery latency: reduce event delivery latency to under 5 minutes for real-time processing. Security and compliance: implement controls to pass PCI-DSS or SOC 2 audits with zero major findings. Developer productivity: improve team delivery speed via self-service IaC modules and automated routines.
About KUKU: Founded in 2018, KUKU is India's leading storytelling platform, offering a vast digital library of audio stories, short courses, and microdramas. KUKU aims to be India's largest cultural exporter of stories, culture, and history to the world, with a firm belief in "Create In India, Create For The World". We deliver immersive entertainment and education through our OTT platforms: Kuku FM, Guru, Kuku TV, and more. With a mission to provide high-quality, personalized stories across genres, formats, and languages, KUKU continues to push boundaries and redefine India's entertainment industry. Website: www.kukufm.com Android App: Google Play iOS App: App Store LinkedIn: KUKU Ready to make an impact? Apply now
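For a feel of the event-ingestion side of this listing, here is a rough boto3 Kinesis producer sketch; the stream name, region, and event shape are assumptions, not details from the posting.

```python
# Batched puts to a hypothetical Kinesis stream via boto3.
import json
import boto3

kinesis = boto3.client("kinesis", region_name="ap-south-1")  # placeholder region

def put_events(events):
    records = [{
        "Data": json.dumps(e).encode("utf-8"),
        "PartitionKey": str(e["user_id"]),  # spreads load across shards
    } for e in events]
    # put_records accepts up to 500 records per call.
    return kinesis.put_records(StreamName="events-stream", Records=records)

put_events([{"user_id": 1, "event": "play"},
            {"user_id": 2, "event": "pause"}])
```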
Posted 5 days ago
8.0 - 13.0 years
8 - 12 Lacs
Hyderabad
Work from Office
We are seeking a seasoned Data Engineering Manager with 8+ years of experience to lead and grow our data engineering capabilities. This role demands strong hands-on expertise in Python, SQL, and Spark, and advanced proficiency in AWS and Databricks. As a technical leader, you will be responsible for architecting and optimizing scalable data solutions that enable analytics, data science, and business intelligence across the organization. Key Responsibilities: Lead the design, development, and optimization of scalable and secure data pipelines using AWS services such as Glue, S3, Lambda, EMR, and Databricks Notebooks, Jobs, and Workflows. Oversee the development and maintenance of data lakes on AWS Databricks, ensuring performance and scalability. Build and manage robust ETL/ELT workflows using Python and SQL, handling both structured and semi-structured data. Implement distributed data processing solutions using Apache Spark/PySpark for large-scale data transformation. Collaborate with cross-functional teams including data scientists, analysts, and product managers to ensure data is accurate, accessible, and well-structured. Enforce best practices for data quality, governance, security, and compliance across the entire data ecosystem. Monitor system performance, troubleshoot issues, and drive continuous improvements in data infrastructure. Conduct code reviews, define coding standards, and promote engineering excellence across the team. Mentor and guide junior data engineers, fostering a culture of technical growth and innovation. Requirements: 8+ years of experience in data engineering with proven leadership in managing data projects and teams. Expertise in Python, SQL, Spark (PySpark),
Posted 5 days ago
5.0 - 8.0 years
8 - 13 Lacs
Hyderabad
Work from Office
SnowFlake Data Engineering (SnowFlake, DBT & ADF) - Lead Programmer Analyst (Experience: 5 to 8 years). We are looking for a highly self-motivated individual for the SnowFlake Data Engineering (SnowFlake, DBT & ADF) - Lead Programmer Analyst role: At least 5 years of experience in designing and developing data pipelines and assets. Must have at least 5 years of experience with at least one columnar MPP cloud data warehouse (Snowflake/Azure Synapse/Redshift). 4 years of experience in ETL tools like Azure Data Factory and Fivetran/DBT. Experience with Git and Azure DevOps. Experience in Agile, Jira, and Confluence. Solid understanding of programming SQL objects (procedures, triggers, views, functions) in SQL Server; experience optimizing SQL queries is a plus. Working knowledge of Azure architecture and Data Lake. Willingness to contribute to documentation (e.g., mapping, defect logs). Generate functional specs for code migration, or ask the right questions thereof. Hands-on programmer with a thorough understanding of performance tuning techniques. Handling large data volume transformations (on the order of 100 GBs monthly). Able to create solutions/data flows to suit requirements. Produce timely documentation, e.g., mapping, UTR, defect/KEDB logs, etc. Self-starter and learner, able to understand and probe for requirements. Tech experience expected. Primary: Snowflake, DBT (development & testing). Secondary: Python, ETL or any data processing tool. Nice to have: domain experience in Healthcare. Should have good oral and written communication. Should be a good team player. Should be proactive and adaptive.
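As a hedged illustration of the programmatic Snowflake loading this role covers, using the snowflake-connector-python package; the account, credentials, warehouse, stage, and table names are placeholders.

```python
# Stage-to-table load plus a validation count against Snowflake.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",   # placeholder connection parameters
    user="etl_user",
    password="***",
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="STAGING",
)
try:
    cur = conn.cursor()
    # Typical ELT step: COPY from an internal stage, then sanity-check rows.
    cur.execute(
        "COPY INTO staging.orders FROM @orders_stage FILE_FORMAT=(TYPE=CSV)")
    cur.execute("SELECT COUNT(*) FROM staging.orders")
    print("rows loaded:", cur.fetchone()[0])
finally:
    conn.close()
```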
Posted 5 days ago
5.0 - 10.0 years
2 - 6 Lacs
Pune
Work from Office
Job Title: Support Specialist - Eagle Platform (Portfolio Management). Location: Riyadh, Saudi Arabia. Type: Full-time / Contract. Industry: Banking / Investment Management / FinTech. Experience Required: 5+ years. We are seeking a highly skilled Support Specialist with hands-on experience working on BNY Mellon's Eagle Investment Systems, particularly the Eagle STAR, PACE, and ACCESS modules used for portfolio accounting, data management, and performance reporting. The ideal candidate will have supported the platform in banking or asset management environments, preferably with experience at Bank of America, BNY Mellon, or institutions using Eagle for middle- and back-office operations. Key Responsibilities: Provide day-to-day technical and functional support for the Eagle Platform, including the STAR, PACE, and Performance modules. Troubleshoot and resolve user issues related to portfolio accounting, performance calculation, and reporting. Act as a liaison between business users and technical teams for change requests, data corrections, and custom reports. Monitor batch jobs, data feeds (security, pricing, transaction data), and system interfaces. Work closely with front-office, middle-office, and operations teams to ensure accurate data processing and reporting. Manage SLA-driven incident resolution and maintain support documentation. Support data migrations, upgrades, and new release rollouts of Eagle components. Engage in root cause analysis and implement preventive measures. Required Skills and Experience: 5+ years of experience in financial systems support, with a strong focus on Eagle Investment Systems. Strong knowledge of portfolio management processes, NAV calculations, and financial instruments (equities, fixed income, derivatives). Prior work experience at Bank of America, BNY Mellon, or with asset managers using Eagle is highly preferred. Proficiency in SQL, ETL tools, and understanding of data architecture in financial environments. Familiarity with upstream/downstream systems such as Bloomberg, Aladdin, or CRD is a plus. Strong analytical skills and attention to detail. Excellent communication skills in English (Arabic is a plus). Preferred Qualifications: Bachelor's degree in Computer Science, Finance, or a related field. ITIL Foundation or similar certification in service management. Prior experience working in a banking or asset management firm in the GCC is a bonus.
Posted 5 days ago
10.0 - 15.0 years
3 - 7 Lacs
Kolkata
Work from Office
Join our Team About this opportunity: We are seeking a highly skilled, hands-on AI Architect - GenAI to lead the design and implementation of production-grade, cloud-native AI and NLP solutions that drive business value and enhance decision-making processes. The ideal candidate will have a robust background in machine learning, generative AI, and the architecture of scalable production systems. As an AI Architect, you will play a key role in shaping the direction of advanced AI technologies and leading teams in the development of cutting-edge solutions. What you will do: Architect and design AI and NLP solutions to address complex business challenges and support strategic decision-making. Lead the design and development of scalable machine learning models and applications using Python, Spark, NoSQL databases, and other advanced technologies. Spearhead the integration of Generative AI techniques in production systems to deliver innovative solutions such as chatbots, automated document generation, and workflow optimization. Guide teams in conducting comprehensive data analysis and exploration to extract actionable insights from large datasets, ensuring these findings are communicated effectively to stakeholders. Collaborate with cross-functional teams, including software engineers and data engineers, to integrate AI models into production environments, ensuring scalability, reliability, and performance. Stay at the forefront of advancements in AI, NLP, and Generative AI, incorporating emerging methodologies into existing models and developing new algorithms to solve complex challenges. Provide thought leadership on best practices for AI model architecture, deployment, and continuous optimization. Ensure that AI solutions are built with scalability, reliability, and compliance in mind. The skills you bring: Minimum of 10+ years of experience in AI, machine learning, or a similar role, with a proven track record of delivering AI-driven solutions. Hands-on experience in designing and implementing end-to-end GenAI-based solutions, particularly in chatbots, document generation, workflow automation, and other generative use cases. Expertise in Python programming and extensive experience with AI frameworks and libraries such as TensorFlow, PyTorch, scikit-learn, and vector databases. Deep understanding and experience with distributed data processing using Spark. Proven experience in architecting, deploying, and optimizing machine learning models in production environments at scale. Expertise in working with open-source Generative AI models (e.g., GPT-4, Mistral, Code-Llama, StarCoder) and applying them to real-world use cases. Expertise in designing cloud-native architectures and microservices for AI/ML applications. Why join Ericsson? What happens once you apply? Primary country and city: India (IN) || Kolkata Req ID: 763161
Posted 5 days ago
3.0 - 5.0 years
6 - 10 Lacs
Bengaluru
Work from Office
About Tarento: Tarento is a fast-growing technology consulting company headquartered in Stockholm, with a strong presence in India and clients across the globe. We specialize in digital transformation, product engineering, and enterprise solutions, working across diverse industries including retail, manufacturing, and healthcare. Our teams combine Nordic values with Indian expertise to deliver innovative, scalable, and high-impact solutions. We're proud to be recognized as a Great Place to Work, a testament to our inclusive culture, strong leadership, and commitment to employee well-being and growth. At Tarento, you'll be part of a collaborative environment where ideas are valued, learning is continuous, and careers are built on passion and purpose. About the Role: We are seeking an experienced Talend ETL Developer with 3-5 years of hands-on experience to design, develop, and maintain robust ETL solutions. The ideal candidate should have strong technical skills, excellent communication abilities, and a good understanding of business data needs. Key Responsibilities: Design, develop, and deploy ETL jobs using Talend to integrate data from multiple sources into target systems. Optimize ETL processes for performance, reliability, and maintainability. Work with business analysts and stakeholders to gather data requirements and translate them into technical specifications. Perform data profiling, cleansing, and transformation to ensure high data quality. Monitor and troubleshoot ETL workflows and provide timely resolution of issues. Document data flows, mappings, and transformation logic. Collaborate with other developers, DBAs, and QA teams to deliver end-to-end solutions. Required Skills & Experience: 3-5 years of strong hands-on experience with Talend ETL tools. Proficiency in designing and developing complex ETL processes and data integration solutions. Good knowledge of relational databases (e.g., MySQL, Oracle, SQL Server) and writing complex SQL queries. Experience with data profiling, data quality checks, and error handling in ETL pipelines. Strong understanding of data warehousing concepts and best practices. Good communication skills to interact with business users and technical teams. Ability to translate business requirements into efficient data processing workflows. Good to Have: Experience with cloud data platforms (AWS, Azure, or GCP). Familiarity with scheduling tools and version control systems. Exposure to Agile delivery methods.
Posted 5 days ago
8.0 - 13.0 years
15 - 20 Lacs
Pune
Work from Office
We are seeking a highly skilled and experienced Data Engineering Architect to join our growing team. As a Data Engineering Architect, you will play a critical role in designing, building, and scaling Google's massive data infrastructure and platforms. You will be a technical leader and mentor, driving innovation and ensuring the highest standards of data quality, reliability, and performance. Responsibilities: Design and Architecture: Design and implement scalable, reliable, and efficient data pipelines and architectures for various Google products and services. Develop and maintain data models, schemas, and ontologies to support diverse data sources and use cases. Evaluate and recommend new and emerging data technologies and tools to improve Google's data infrastructure. Collaborate with product managers, engineers, and researchers to define data requirements and translate them into technical solutions. Data Processing and Pipelines: Build and optimize batch and real-time data pipelines using Google Cloud Platform (GCP) services such as Dataflow, Dataproc, Pub/Sub, and Cloud Functions. Develop and implement data quality checks and validation processes to ensure data accuracy and consistency. Design and implement data governance policies and procedures to ensure data security and compliance. Data Storage and Management: Design and implement scalable data storage solutions using GCP services such as BigQuery, Cloud Storage, and Spanner. Optimize data storage and retrieval for performance and cost-effectiveness. Implement data lifecycle management policies and procedures. Team Leadership and Mentorship: Provide technical leadership and guidance to data engineers and other team members. Mentor and coach junior engineers to develop their skills and expertise. Foster a culture of innovation and collaboration within the team. Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. 8+ years of experience in data engineering or a related field. Strong understanding of data warehousing, data modeling, and ETL processes. Expertise in designing and implementing large-scale data pipelines and architectures. Proficiency in SQL and at least one programming language such as Python or Java. Experience with Google Cloud Platform (GCP) services such as BigQuery, Dataflow, Dataproc, Pub/Sub, and Cloud Storage. Experience with open-source data processing frameworks such as Hadoop, Spark, and Kafka. Excellent communication, interpersonal, and collaboration skills. Preferred Qualifications: Experience with data governance and data quality management. Experience with machine learning and data science. Experience with containerization and orchestration technologies such as Docker and Kubernetes.
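Dataflow pipelines are written with Apache Beam; below is an illustrative word-count-style Beam sketch of the kind run on Dataflow. The bucket paths are hypothetical, and deploying to Dataflow would additionally require runner/project options.

```python
# Illustrative Apache Beam pipeline (runs locally; add DataflowRunner
# options to deploy). Paths and parsing logic are assumptions.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions()  # e.g., --runner=DataflowRunner --project=...

with beam.Pipeline(options=options) as p:
    (p
     | "Read" >> beam.io.ReadFromText("gs://example-bucket/logs/*.txt")
     | "Parse" >> beam.Map(lambda line: line.split(",")[0])   # extract a key
     | "Pair" >> beam.Map(lambda key: (key, 1))
     | "Count" >> beam.CombinePerKey(sum)
     | "Format" >> beam.MapTuple(lambda k, n: f"{k},{n}")
     | "Write" >> beam.io.WriteToText("gs://example-bucket/out/counts"))
```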
Posted 5 days ago
0.0 - 2.0 years
2 - 3 Lacs
Hyderabad
Work from Office
Designation: Trainee Process Associate (Freshers). Locations: Hyderabad (Panjagutta). Interview Address: Maatrum Technologies, 3rd Floor, Goyaz Jewellers Building, Beside Kotak Bank, Panjagutta, Hyderabad 500082. Walk-in Date: 21st June 2025 (Saturday) and 23rd June 2025 (Monday); we are not working on Sunday. Roles & Responsibilities: Process data extracted via Maatrum Technologies' online portal. Review scanned property-related documents provided by the bank to identify relevant details. Gather records from online sources related to the property. Manually extract data from scanned and online documents. Work using Maatrum Technologies' proprietary online system. Adhere to company and bank policies while carrying out the above tasks. Meet the required turnaround time as per project standards. Candidate Requirements: Decent written and verbal communication skills in English. Willingness to work in rotational shifts, 6 days a week. Typing proficiency. Ability to read and write fluently in Telugu. Open to both male and female candidates. Qualification: Any graduate is eligible to apply. Maatrum Technologies is a product-based company specializing in title verification and legal verification for real estate properties within India. We are a sister concern of the esteemed Dr. Agarwal's Eye Hospital Group. About Maatrum Technologies: Maatrum is India's pioneering online real estate title verification company, empowered by cutting-edge technology. Established in April 2015 under the Companies Act 2013, we have made a mark in the industry by harnessing technology to procure real estate documents directly from government databases. Our team of seasoned real estate lawyers leverages our robust and proprietary technology platform to deliver accurate reports in record time.
Posted 5 days ago
4.0 - 7.0 years
10 - 15 Lacs
Bengaluru
Work from Office
Job Description Summary The Data Scientist will work in teams addressing statistical, machine learning, and data understanding problems in a commercial technology and consultancy development environment. In this role, you will contribute to the development and deployment of modern machine learning, operational research, semantic analysis, and statistical methods for finding structure in large data sets. Job Description Site Overview: Established in 2000, the John F. Welch Technology Center (JFWTC) in Bengaluru is GE Aerospace's multidisciplinary research and engineering center. Pushing the boundaries of innovation every day, engineers and scientists at JFWTC have contributed to hundreds of aviation patents, pioneering breakthroughs in engine technologies, advanced materials, and additive manufacturing. Role Overview: As a Data Scientist, you will be part of a data science or cross-disciplinary team on commercially-facing development projects, typically involving large, complex data sets. These teams typically include statisticians, computer scientists, software developers, engineers, product managers, and end users, working in concert with partners in GE business units. Potential application areas include remote monitoring and diagnostics across infrastructure and industrial sectors, financial portfolio risk assessment, and operations optimization. In this role, you will: Develop analytics within well-defined projects to address customer needs and opportunities. Work alongside software developers and software engineers to translate algorithms into commercially viable products and services. Work in technical teams in the development, deployment, and application of applied analytics, predictive analytics, and prescriptive analytics. Perform exploratory and targeted data analyses using descriptive statistics and other methods. Work with data engineers on data quality assessment, data cleansing, and data analytics. Generate reports, annotated code, and other project artifacts to document, archive, and communicate your work and outcomes. Share and discuss findings with team members. Required Qualifications: Bachelor's Degree in Computer Science or STEM majors (Science, Technology, Engineering and Math) with basic experience. Desired Characteristics: - Expertise in one or more programming languages and analytic software tools (e.g., Python, R, SAS, SPSS). Strong understanding of machine learning algorithms, statistical methods, and data processing techniques. - Exceptional ability to analyze large, complex data sets and derive actionable insights. Proficiency in applying descriptive, predictive, and prescriptive analytics to solve real-world problems. - Demonstrated skill in data cleansing, data quality assessment, and data transformation. Experience working with big data technologies and tools (e.g., Hadoop, Spark, SQL). - Excellent communication skills, both written and verbal. Ability to convey complex technical concepts to non-technical stakeholders and collaborate effectively with cross-functional teams. - Demonstrated commitment to continuous learning and staying up-to-date with the latest advancements in data science, machine learning, and related fields. Active participation in the data science community through conferences, publications, or contributions to open-source projects. - Ability to thrive in a fast-paced, dynamic environment and adapt to changing priorities and requirements. Flexibility to work on diverse projects across various domains.
Preferred Qualifications: - Awareness of feature extraction and real-time analytics methods. - Understanding of analytic prototyping, scaling, and solutions integration. - Ability to work with large, complex data sets and derive meaningful insights. - Familiarity with machine learning techniques and their application in solving real-world problems. - Strong problem-solving skills and the ability to work independently and collaboratively in a team environment. - Excellent communication skills, with the ability to convey complex technical concepts to non-technical stakeholders. Domain Knowledge: Demonstrated awareness of industry and technology trends in data science. Demonstrated awareness of customer and stakeholder management and business metrics. Leadership: Demonstrated awareness of how to function in a team setting. Demonstrated awareness of critical thinking and problem-solving methods. Demonstrated awareness of presentation skills. Personal Attributes: Demonstrated awareness of how to leverage curiosity and creativity to drive business impact. Humble: respectful, receptive, agile, eager to learn. Transparent: shares critical information, speaks with candor, contributes constructively. Focused: quick learner, strategically prioritizes work, committed. Leadership ability: strong communicator, decision-maker, collaborative. Problem solver: analytical-minded, challenges existing processes, critical thinker. Whether we are manufacturing components for our engines, driving innovation in fuel and noise reduction, or unlocking new opportunities to grow and deliver more productivity, our GE Aerospace teams are dedicated and making a global impact. Join us and help move the aerospace industry forward. Additional Information: Relocation Assistance Provided: No
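To ground the exploratory and descriptive-statistics duties this listing describes, here is a small pandas sketch on a hypothetical sensor dataset; the file name and columns are assumptions for illustration.

```python
# Exploratory pass over a hypothetical sensor dataset with pandas.
import pandas as pd

df = pd.read_csv("sensor_readings.csv")  # illustrative file

# Descriptive statistics and a basic data-quality check.
print(df.describe())
print(df.isna().mean().sort_values(ascending=False).head())

# Targeted analysis: per-engine temperature summary.
summary = (df.groupby("engine_id")["temperature"]
             .agg(["mean", "std", "max"])
             .sort_values("max", ascending=False))
print(summary.head())
```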
Posted 5 days ago
5.0 - 7.0 years
0 Lacs
Chennai
Hybrid
Role & responsibilities: Direct Responsibilities: Understand business requirements from business analysts and users, and apply an analytical mind to understand the existing process and propose better solutions. Work on TSD designs, development, testing, deployment, and support. Suggest and implement innovative approaches; should be adaptable to new technology or methodology. Contributing Responsibilities: Contribute towards knowledge-sharing initiatives with other team members. Contribute to documentation of solutions and configurations of the models. Technical & Behavioral Competencies: Mandatory: 3+ years of experience in Corporate and Institutional Banking IT, with a full understanding of the Corporate Banking and/or Securities Services activity. Good understanding of AML monitoring tools and the data needed for AML detection models. Good understanding of Data Analysis and Data Mapping processes. Extensive experience in working with functional and technical teams, defining requirements (mainly technical specifications), establishing technical strategies, and leading the full life-cycle delivery of projects. Experience in data-warehouse architectural design, providing efficient solutions in Compliance AML data domains. Experience in Python development. Excellent communication skills with the ability to explain complex technical issues in a simple, concise manner. Strong coordination and organizational skills. Multi-tasking capabilities. Preferred candidate profile
Posted 5 days ago
4.0 - 9.0 years
3 - 7 Lacs
Hyderabad
Work from Office
1. Role: ML Engineer. Must have: 4+ years of experience. Strong understanding of statistics. Knowledge and experience in statistical and data mining techniques: GLM/regression, random forest, boosting, trees, text mining, ensemble techniques, etc. Strong skills in software prototyping and engineering, with expertise in Python required. Experience creating and using advanced machine learning algorithms and statistics: regression, simulation, scenario analysis, modelling, clustering, decision trees, neural networks, etc. Knowledge of time-series and product data patterns from the manufacturing process. Qualification & Skills: Preferred degree in Mechanical Engineering, Computer Science, ECE, Statistics, Applied Math, or a related field. 4+ years of practical experience with ML projects, data processing, database programming, and data analytics. Extensive background in data mining and statistical analysis. Able to understand various data structures and common methods in data transformation. Excellent pattern recognition and predictive modelling skills. Experience with Business Intelligence tools like Power BI is an asset.
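As a hedged sketch of the modelling stack this listing names (regression, random forests), here is a short scikit-learn example on synthetic data; the feature construction is purely illustrative.

```python
# Random-forest regression on synthetic stand-in process data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                  # stand-in process features
y = X[:, 0] * 2.0 + np.sin(X[:, 1]) + rng.normal(scale=0.1, size=1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("MAE:", mean_absolute_error(y_te, model.predict(X_te)))
```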
Posted 5 days ago
The data processing job market in India is thriving with opportunities for job seekers in the field. With the growing demand for data-driven insights in various industries, the need for professionals skilled in data processing is on the rise. Whether you are a fresh graduate looking to start your career or an experienced professional looking to advance, there are ample opportunities in India for data processing roles.
Major Indian cities, including Bengaluru, Hyderabad, Mumbai, Pune, Noida, Gurugram, Chennai, and Kolkata, are actively hiring for data processing roles, with a multitude of job opportunities available for job seekers.
The average salary range for data processing professionals in India varies based on experience and skill level. Entry-level positions can expect to earn between INR 3-6 lakh per annum, while experienced professionals can earn upwards of INR 10 lakh per annum.
A typical career path in data processing may include roles such as Data Analyst, Data Engineer, Data Scientist, and Data Architect. As professionals gain experience and expertise in the field, they may progress from Junior Data Analyst to Senior Data Analyst, and eventually to roles such as Data Scientist or Data Architect.
In addition to data processing skills, professionals in this field are often expected to have knowledge of programming languages such as Python, SQL, and R. Strong analytical and problem-solving skills are also essential for success in data processing roles.
As you explore opportunities in the data processing job market in India, remember to prepare thoroughly for interviews and showcase your skills and expertise confidently. With the right combination of skills and experience, you can embark on a successful career in data processing in India. Good luck!