7.0 years
0 Lacs
Mumbai, Maharashtra, India
Remote
About This Role

About Aladdin Financial Engineering (AFE): Join a diverse and collaborative team of over 300 modelers and technologists in Aladdin Financial Engineering (AFE) within BlackRock Solutions, the business responsible for the research and development of Aladdin's financial models. This group is also accountable for analytics production, enhancing the infrastructure platform, and delivering analytics content to portfolio and risk management professionals, both within BlackRock and across the Aladdin client community. The models developed and supported by AFE span a wide array of financial products covering equities, fixed income, commodities, derivatives, and private markets. AFE provides investment insights that range from an analysis of cash flows on a single bond to the overall financial risk associated with an entire portfolio, balance sheet, or enterprise.

Role Description

We are looking for a person to join the Advanced Data Analytics team within AFE Single Security. Advanced Data Analytics is a team of quantitative data and product specialists focused on delivering Single Security data content, governance, product solutions, and a research platform. The team leverages data, cloud, and emerging technologies to build an innovative data platform, with a focus on business and research use cases in the Single Security space. The team uses statistical and mathematical methodologies to derive insights and generate content that supports predictive models, clustering, and classification solutions, and enables governance. The team works on Mortgage, Structured, and Credit products. The successful candidate will initially focus on data and model governance, expanding to derived data and analytics content in the MBS, Structured Products, and Credit space.

Experience

- Experience with Scala
- Knowledge of ETL, data curation, and analytical jobs using a distributed computing framework such as Spark (a minimal sketch follows below)
- Knowledge of and experience with large enterprise databases (e.g., Snowflake, Cassandra) and cloud-managed services (e.g., Dataproc, Databricks)
- Knowledge of financial instruments such as corporate bonds and derivatives
- Knowledge of regression methodologies
- Aptitude for designing and building tools for data governance
- Python knowledge is a plus

Qualifications

- Bachelor's or Master's in Computer Science, Math, Economics, or a related field
- 7+ years of relevant experience

Our Benefits

To help you stay energized, engaged and inspired, we offer a wide range of benefits including a strong retirement plan, tuition reimbursement, comprehensive healthcare, support for working parents and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about.

Our Hybrid Work Model

BlackRock's hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person, aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock.
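For context on the kind of Spark-based data curation work this role describes, here is a minimal, hypothetical Scala sketch, not BlackRock code. It assumes a local SparkSession; the file paths and column names (bonds.csv, coupon, ticker, issuer) are invented for illustration.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object BondCurationJob {
  def main(args: Array[String]): Unit = {
    // Local session for illustration; a real job would run on a cluster.
    val spark = SparkSession.builder()
      .appName("bond-curation")
      .master("local[*]")
      .getOrCreate()

    // Hypothetical input: one row per bond with an issuer, ticker, and coupon.
    val bonds = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("bonds.csv")

    // Simple curation: drop rows missing a coupon, normalize the ticker,
    // and aggregate the average coupon by issuer.
    val curated = bonds
      .filter(col("coupon").isNotNull)
      .withColumn("ticker", upper(trim(col("ticker"))))
      .groupBy("issuer")
      .agg(avg("coupon").as("avg_coupon"))

    curated.write.mode("overwrite").parquet("curated_bonds")
    spark.stop()
  }
}
```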
About BlackRock

At BlackRock, we are all connected by one mission: to help more and more people experience financial well-being. Our clients, and the people they serve, are saving for retirement, paying for their children's educations, buying homes and starting businesses. Their investments also help to strengthen the global economy: support businesses small and large; finance infrastructure projects that connect and power cities; and facilitate innovations that drive progress.

This mission would not be possible without our smartest investment, the one we make in our employees. It's why we're dedicated to creating an environment where our colleagues feel welcomed, valued and supported with networks, benefits and development opportunities to help them thrive.

For additional information on BlackRock, please visit: Twitter: @blackrock | LinkedIn: www.linkedin.com/company/blackrock

BlackRock is proud to be an Equal Opportunity Employer. We evaluate qualified applicants without regard to age, disability, family status, gender identity, race, religion, sex, sexual orientation and other protected attributes at law.
Posted 3 days ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Title: Lead Data Engineer – C12 / Assistant Vice President (India)

The Role

The Data Engineer is accountable for developing high-quality data products to support the Bank's regulatory requirements and data-driven decision making. A Data Engineer will serve as an example to other team members, work closely with customers, and remove or escalate roadblocks. By applying their knowledge of data architecture standards, data warehousing, data structures, and business intelligence, they will contribute to business outcomes on an agile team.

Responsibilities

- Develop and support scalable, extensible, and highly available data solutions
- Deliver on critical business priorities while ensuring alignment with the wider architectural vision
- Identify and help address potential risks in the data supply chain
- Follow and contribute to technical standards
- Design and develop analytical data models

Required Qualifications & Work Experience

- First Class Degree in Engineering/Technology (4-year graduate course)
- 8 to 12 years' experience implementing data-intensive solutions using agile methodologies
- Experience with relational databases and using SQL for data querying, transformation and manipulation
- Experience modelling data for analytical consumers
- Ability to automate and streamline the build, test and deployment of data pipelines
- Experience with cloud-native technologies and patterns
- A passion for learning new technologies, and a desire for personal growth, through self-study, formal classes, or on-the-job training
- Excellent communication and problem-solving skills
- An inclination to mentor; an ability to lead and deliver medium-sized components independently

Technical Skills (Must Have)

- ETL: Hands-on experience building data pipelines. Proficiency in two or more data integration platforms such as Ab Initio, Apache Spark, Talend and Informatica
- Big Data: Experience with 'big data' platforms such as Hadoop, Hive or Snowflake for data storage and processing
- Data Warehousing & Database Management: Expertise in data warehousing concepts, and relational (Oracle, MSSQL, MySQL) and NoSQL (MongoDB, DynamoDB) database design
- Data Modeling & Design: Good exposure to data modeling techniques; design, optimization and maintenance of data models and data structures
- Languages: Proficient in one or more programming languages commonly used in data engineering, such as Python, Java or Scala
- DevOps: Exposure to concepts and enablers: CI/CD platforms, version control, automated quality control management
- Data Governance: A strong grasp of principles and practice, including data quality, security, privacy and compliance

Technical Skills (Valuable)

- Ab Initio: Experience developing Co>Op graphs; ability to tune for performance. Demonstrable knowledge across the full suite of Ab Initio toolsets, e.g., GDE, Express>IT, Data Profiler, Conduct>IT, Control>Center, Continuous>Flows
- Cloud: Good exposure to public cloud data platforms such as S3, Snowflake, Redshift, Databricks, BigQuery, etc. Demonstrable understanding of underlying architectures and trade-offs
- Data Quality & Controls: Exposure to data validation, cleansing, enrichment and data controls
- Containerization: Fair understanding of containerization platforms like Docker and Kubernetes
- File Formats: Exposure to event/file/table formats such as Avro, Parquet, Protobuf, Iceberg and Delta (illustrated in the sketch after this posting)
- Others: Experience using a job scheduler, e.g., Autosys.
- Exposure to Business Intelligence tools, e.g., Tableau, Power BI

Certification in any one or more of the above topics would be an advantage.

Job Family Group: Technology
Job Family: Digital Software Engineering
Time Type: Full time

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.
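As a point of reference for the file-format exposure this posting lists, here is a minimal, hypothetical Scala Spark sketch that converts CSV data to Parquet and reads it back. Paths and file names are invented; Avro output would additionally require the spark-avro package on the classpath.

```scala
import org.apache.spark.sql.SparkSession

object FormatConversionJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("format-conversion")
      .master("local[*]")
      .getOrCreate()

    // Hypothetical raw input.
    val raw = spark.read
      .option("header", "true")
      .csv("input/transactions.csv")

    // Parquet is columnar and schema-aware, so downstream jobs
    // can prune columns and push down filters.
    raw.write.mode("overwrite").parquet("output/transactions_parquet")

    // Reading it back preserves the schema.
    val parquet = spark.read.parquet("output/transactions_parquet")
    parquet.printSchema()

    spark.stop()
  }
}
```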
Posted 3 days ago
6.0 - 12.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Description

We are looking for a driven Scala Developer to join our dynamic team at GlobalLogic. In this role, you will have the opportunity to work on outstanding projects that shape the future of technology. You will collaborate with our world-class engineers to deliver high-quality solutions in a creative, innovative environment.

Requirements

Minimum 6-12 years of software development experience

🔹 Scala Language Mastery
- Strong understanding of both functional and object-oriented programming paradigms
- Deep knowledge of: immutability, lazy evaluation; traits, case classes, companion objects; pattern matching; advanced type system: generics, type bounds, implicits, context bounds (a minimal sketch of several of these features follows this posting)

🔹 Functional Programming (FP)
- Hands-on experience with: pure functions, monads, functors, higher-kinded types
- FP libraries: Cats, Scalaz, or ZIO
- Understanding of effect systems and referential transparency

📦 Frameworks & Libraries

🔹 Backend / API Development
- RESTful API development using the Play Framework or Akka HTTP
- Experience with GraphQL is a plus

🔹 Concurrency & Asynchronous Programming
- Deep understanding of: Futures, Promises; Akka actors, Akka Streams; ZIO or Cats Effect

🛠️ Build, Tooling & DevOps
- SBT for project building and dependency management
- Familiarity with Git, Docker, and Kubernetes
- CI/CD experience with Jenkins, GitHub Actions, or similar tools
- Comfortable with the Linux command line and shell scripting

🗄️ Database & Data Systems
- Strong experience with: SQL databases (PostgreSQL, MySQL); NoSQL databases (Cassandra, MongoDB); streaming/data pipelines (Kafka, Spark with Scala); ORM / FP database libraries (Slick, Doobie)

🧱 Architecture & System Design
- Microservices architecture design and deployment
- Event-driven architecture
- Familiarity with Domain-Driven Design (DDD)
- Designing for scalability, fault tolerance, and observability

🧪 Testing & Quality
- Experience with testing libraries: ScalaTest, Specs2, MUnit
- ScalaCheck for property-based testing
- Test-driven development (TDD) and behavior-driven development (BDD)

🌐 Cloud & Infrastructure (Desirable)
- Deploying Scala apps on AWS (e.g., EC2, Lambda, ECS, RDS), GCP or Azure
- Experience with infrastructure-as-code (Terraform, CloudFormation) is a plus

🧠 Soft Skills & Leadership
- Mentorship: ability to coach junior developers
- Code reviews: ensure code quality and consistency
- Communication: work cross-functionally with product managers, DevOps, QA
- Agile development: experience with Scrum/Kanban
- Ownership: capable of taking features from design to production

⚡ Optional (but Valuable)
- Scala.js / Scala Native experience
- Machine learning with Scala (e.g., Spark MLlib)
- Exposure to Kotlin, Java, or Python

Job responsibilities

As a Scala Developer / Big Data Engineer, you will:
– Develop, test, and deploy high-quality Scala applications.
– Apply functional and object-oriented programming paradigms.
– Ensure code quality through immutability, lazy evaluation, and pattern matching.
– Design and build scalable systems using traits, case classes, and companion objects.
– Collaborate with cross-functional teams to determine project requirements and deliver solutions successfully.
– Troubleshoot and resolve complex technical issues.
– Participate in code reviews to maintain our high standards of quality.

What we offer

Culture of caring. At GlobalLogic, we prioritize a culture of caring. Across every region and department, at every level, we consistently put people first.
From day one, you'll experience an inclusive culture of acceptance and belonging, where you'll have the chance to build meaningful connections with collaborative teammates, supportive managers, and compassionate leaders.

Learning and development. We are committed to your continuous learning and development. You'll learn and grow daily in an environment with many opportunities to try new things, sharpen your skills, and advance your career at GlobalLogic. With our Career Navigator tool as just one example, GlobalLogic offers a rich array of programs, training curricula, and hands-on opportunities to grow personally and professionally.

Interesting & meaningful work. GlobalLogic is known for engineering impact for and with clients around the world. As part of our team, you'll have the chance to work on projects that matter. Each is a unique opportunity to engage your curiosity and creative problem-solving skills as you help clients reimagine what's possible and bring new solutions to market. In the process, you'll have the privilege of working on some of the most cutting-edge and impactful solutions shaping the world today.

Balance and flexibility. We believe in the importance of balance and flexibility. With many functional career areas, roles, and work arrangements, you can explore ways of achieving the perfect balance between your work and life. Your life extends beyond the office, and we always do our best to help you integrate and balance the best of work and life, having fun along the way!

High-trust organization. We are a high-trust organization where integrity is key. By joining GlobalLogic, you're placing your trust in a safe, reliable, and ethical global company. Integrity and trust are a cornerstone of our value proposition to our employees and clients. You will find truthfulness, candor, and integrity in everything we do.

About GlobalLogic

GlobalLogic, a Hitachi Group Company, is a trusted digital engineering partner to the world's largest and most forward-thinking companies. Since 2000, we've been at the forefront of the digital revolution, helping create some of the most innovative and widely used digital products and experiences. Today we continue to collaborate with clients in transforming businesses and redefining industries through intelligent products, platforms, and services.
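To make the language requirements above concrete, here is a minimal, self-contained Scala sketch touching sealed traits, case classes, pattern matching, and Futures. It is an illustration only, not GlobalLogic code; all names are invented.

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

// A sealed trait plus case classes form a small algebraic data type.
sealed trait PaymentEvent
case class Authorized(id: String, amount: BigDecimal) extends PaymentEvent
case class Declined(id: String, reason: String) extends PaymentEvent

object PaymentDemo {
  // Pattern matching destructures the event; the compiler warns
  // if a case of the sealed trait is missed.
  def describe(event: PaymentEvent): String = event match {
    case Authorized(id, amount) => s"Payment $id authorized for $amount"
    case Declined(id, reason)   => s"Payment $id declined: $reason"
  }

  def main(args: Array[String]): Unit = {
    // A Future models an asynchronous computation.
    val pending: Future[PaymentEvent] =
      Future(Authorized("p-42", BigDecimal("99.95")))

    val message: Future[String] = pending.map(describe)

    // Blocking here only to keep the demo self-contained.
    println(Await.result(message, 5.seconds))
  }
}
```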
Posted 3 days ago
1.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About The Team

Rubrik is on a mission to secure the world's data, and our Information Technology Team is committed to supporting this mission. As part of the newly founded IT AI team, you'll be pivotal in driving AI-powered transformation, enabling smarter automation, data-driven insights, and scalable solutions that empower Rubrik's mission.

About The Role

We are seeking an experienced GenAI Engineer to join our Data Engineering team, with a focus on building AI agents and workflows. The successful candidate will integrate data sources and build MCP clients/servers to support the development and deployment of LLM-based agents and bots. This role involves close collaboration with business teams, data scientists, and fellow data engineers to ensure smooth data integration and flow, enabling the Data Engineering team to leverage GenAI tools for advanced data solutions.

What You'll Do

- Design and develop data integrations through MCP protocols or traditional data extraction mechanisms
- Leverage Snowflake Cortex, Gemini Agentspace, or similar tools to build scalable and efficient data solutions for AI workloads, enabling the Data Engineering team to generate high-quality data products from unstructured and structured data
- Design and develop scalable data pipelines for GenAI model training and deployment, utilizing tools such as Snowflake Cortex and Databricks LLM tooling (Mosaic AI, RAG, Model Serving)
- Ensure data quality, integrity, and scalability for large-scale AI workloads, supporting the development of GenAI models
- Collaborate with business teams, data engineers, and application developers to deliver products that streamline business processes or drive top-line growth and bottom-line improvements
- Integrate data pipelines with existing infrastructure, enabling seamless data flow and analytics
Experience You'll Need

- 1+ years of experience building AI agents or leveraging Snowflake Cortex, Gemini Agentspace, or similar open-source tooling
- 3+ years of experience in data engineering, with a focus on AI/ML workloads
- 5+ years of experience working in data analytics with either Snowflake or Databricks
- Strong programming skills in languages such as Python, Java, or Scala
- Knowledge of data storage solutions (e.g., Snowflake, Databricks) and data APIs
- Experience with cloud configuration and data governance
- Strong problem-solving skills and the ability to work in a fast-paced environment
- Experience with large language models (LLMs), such as transformer-based models, and frameworks like LangChain or similar

Preferred Qualifications

- Building AI agents and agentic workflows
- Experience leveraging MCP and Agent2Agent protocols
- Knowledge of generative models and their applications in data engineering
- Experience with data governance and security best practices for GenAI workloads
- Experience with Agile development methodologies and collaboration tools (e.g., Jira, GitHub)

Join Us in Securing the World's Data

Rubrik (NYSE: RBRK) is on a mission to secure the world's data. With Zero Trust Data Security™, we help organizations achieve business resilience against cyberattacks, malicious insiders, and operational disruptions. Rubrik Security Cloud, powered by machine learning, secures data across enterprise, cloud, and SaaS applications. We help organizations uphold data integrity, deliver data availability that withstands adverse conditions, continuously monitor data risks and threats, and restore businesses with their data when infrastructure is attacked.

Linkedin | X (formerly Twitter) | Instagram | Rubrik.com

Inclusion @ Rubrik

At Rubrik, we are dedicated to fostering a culture where people from all backgrounds are valued, feel they belong, and believe they can succeed. Our commitment to inclusion is at the heart of our mission to secure the world's data. Our goal is to hire and promote the best talent, regardless of background. We continually review our hiring practices to ensure fairness and strive to create an environment where every employee has equal access to opportunities for growth and excellence. We believe in empowering everyone to bring their authentic selves to work and achieve their fullest potential.

Our inclusion strategy focuses on three core areas of our business and culture:

Our Company: We are committed to building a merit-based organization that offers equal access to growth and success for all employees globally. Your potential is limitless here.

Our Culture: We strive to create an inclusive atmosphere where individuals from all backgrounds feel a strong sense of belonging, can thrive, and do their best work. Your contributions help us innovate and break boundaries.

Our Communities: We are dedicated to expanding our engagement with the communities we operate in, creating opportunities for underrepresented talent and driving greater innovation for our clients. Your impact extends beyond Rubrik, contributing to safer and stronger communities.

Equal Opportunity Employer/Veterans/Disabled

Rubrik is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, or protected veteran status and will not be discriminated against on the basis of disability.
Rubrik provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, sex, national origin, age, disability or genetics. In addition to federal law requirements, Rubrik complies with applicable state and local laws governing nondiscrimination in employment in every location in which the company has facilities. This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation and training.

Federal law requires employers to provide reasonable accommodation to qualified individuals with disabilities. Please contact us at hr@rubrik.com if you require a reasonable accommodation to apply for a job or to perform your job. Examples of reasonable accommodation include making a change to the application process or work procedures, providing documents in an alternate format, using a sign language interpreter, or using specialized equipment.

EEO IS THE LAW: NOTIFICATION OF EMPLOYEE RIGHTS UNDER FEDERAL LABOR LAWS
Posted 3 days ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
We are looking for an immediate joiner: an experienced Big Data Developer with a strong background in Kafka, PySpark, Python/Scala, Spark, SQL, and the Hadoop ecosystem. The ideal candidate should have over 5 years of experience and be ready to join immediately. This role requires hands-on expertise in big data technologies and the ability to design and implement robust data processing solutions.

Responsibilities

- Design, develop, and maintain scalable data processing pipelines using Kafka, PySpark, Python/Scala, and Spark.
- Work extensively with the Kafka and Hadoop ecosystem, including HDFS, Hive, and other related technologies.
- Write efficient SQL queries for data extraction, transformation, and analysis.
- Implement and manage Kafka streams for real-time data processing (a minimal sketch follows this posting).
- Utilize scheduling tools to automate data workflows and processes.
- Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions.
- Ensure data quality and integrity by implementing robust data validation processes.
- Optimize existing data processes for performance and scalability.

Requirements

- Experience with GCP.
- Knowledge of data warehousing concepts and best practices.
- Familiarity with machine learning and data analysis tools.
- Understanding of data governance and compliance standards.

This job was posted by Arun Kumar K from krtrimaIQ Cognitive Solutions.
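For context on the Kafka plus Spark stack this posting centers on, here is a minimal, hypothetical Scala sketch using Spark Structured Streaming's Kafka source. The broker address and topic name are invented, and running it requires the spark-sql-kafka connector on the classpath.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object KafkaStreamJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("kafka-stream")
      .master("local[*]")
      .getOrCreate()

    // Subscribe to a hypothetical topic; Kafka rows arrive with
    // binary key/value columns plus metadata such as the timestamp.
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "localhost:9092")
      .option("subscribe", "events")
      .load()
      .selectExpr("CAST(value AS STRING) AS payload", "timestamp")

    // A simple windowed count as a stand-in for real processing.
    val counts = events
      .groupBy(window(col("timestamp"), "1 minute"))
      .count()

    // Write results to the console; a production job would target
    // a sink such as HDFS, Hive, or another Kafka topic.
    val query = counts.writeStream
      .outputMode("complete")
      .format("console")
      .start()

    query.awaitTermination()
  }
}
```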
Posted 3 days ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Description

Are you passionate about data? Does the prospect of dealing with massive volumes of data excite you? Do you want to build data engineering solutions at scale? If yes, this opportunity will appeal to you.

We are actively seeking a talented Data Engineer to join our dynamic reporting and analytics team. We are looking for a highly motivated individual who is passionate about data, demonstrates strong autonomy, and has deep expertise in the design, creation, and management of large and complex data pipelines.

Key job responsibilities

- Design and implement data pipelines and ETL processes
- Create scalable data models and data architectures
- Drive best practices for data engineering, testing, and documentation
- Ensure data quality, consistency, and compliance standards are met
- Collaborate with cross-functional teams on data-driven solutions
- Contribute to technical strategy and architectural decisions

Basic Qualifications

- 5+ years of data engineering experience
- Experience with SQL
- Experience with data modeling, warehousing and building ETL pipelines
- Experience in at least one modern scripting or programming language, such as Python, Java, Scala, or NodeJS
- Knowledge of distributed systems as it pertains to data storage and computing

Preferred Qualifications

- Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
- Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases)
- Knowledge of professional software engineering best practices for the full software development life cycle, including coding standards, software architectures, code reviews, source control management, continuous deployments, testing, and operational excellence

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company - ASSPL - Karnataka
Job ID: A2971909
Posted 3 days ago
6.0 - 10.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About Us

Acceldata is the market leader in Enterprise Data Observability. Founded in 2018, Silicon Valley-based Acceldata has developed the world's first Enterprise Data Observability Platform to help build and operate great data products. Enterprise Data Observability is at the intersection of today's hottest and most crucial technologies such as AI, LLMs, Analytics, and DataOps. Acceldata provides mission-critical capabilities that deliver highly trusted and reliable data to power enterprise data products.

Delivered as a SaaS product, Acceldata's solutions have been embraced by global customers such as HPE, HSBC, Visa, Freddie Mac, Manulife, Workday, Oracle, PubMatic, PhonePe (Walmart), Hersheys, and Dun & Bradstreet. Acceldata is a Series-C funded company whose investors include Insight Partners, March Capital, Lightspeed, Sorenson Ventures, Industry Ventures, and Emergent Ventures.

About the Role

We are looking for an experienced Lead SDET for our Open Source Data Platform (ODP), specializing in ensuring the quality and performance of large-scale data systems. In this role, you will work closely with development and operations teams to design and execute comprehensive test strategies for ODP, including Hadoop, Spark, Hive, Kafka, and other related technologies. You will focus on test automation, performance tuning, and identifying bottlenecks in distributed data systems.

Your key responsibilities will include writing test plans, creating automated test scripts, and conducting functional, regression, and performance testing (a minimal automated-test sketch follows this posting). You will be responsible for identifying and resolving defects, ensuring data integrity, and improving testing processes. Strong collaboration skills are essential, as you will be interacting with cross-functional teams and driving quality initiatives. Your work will directly contribute to maintaining high-quality standards for big data solutions and enhancing their reliability at scale.

You are a great fit for this role if you have

- Proven expertise in quality engineering, with a strong background in test automation, performance testing, and defect management across multiple data platforms
- A proactive mindset to define and implement comprehensive test strategies that ensure the highest quality standards are met
- Experience with both functional and non-functional testing, with a particular focus on automated test development
- A collaborative team-player attitude, with the ability to work cross-functionally with development teams to resolve issues and deliver timely fixes
- Strong communication skills, with the ability to mentor junior engineers and share knowledge to improve testing practices across the team
- A commitment to continuous improvement, with the ability to analyze testing processes and recommend enhancements to align with industry best practices
- The ability to quickly learn new technologies

What We Look For

- 6-10 years of hands-on experience in quality engineering and quality assurance, focusing on test automation, performance testing, and defect management across multiple data platforms
- Proficiency in programming languages such as Java, Python, or Scala for writing test scripts and automating test cases, with hands-on experience developing automated tests using test automation frameworks to ensure robust and scalable test suites
- Proven ability to define and execute comprehensive test strategies, including writing test plans, test cases, and scripts for both functional and non-functional testing, to ensure predictable delivery of high-quality products and solutions
- Experience with version control systems like Git and CI/CD tools such as Jenkins or GitLab CI to manage code changes and automate test execution within the development pipeline
- Expertise in identifying, tracking, and resolving defects and issues, collaborating closely with developers and product teams to ensure timely fixes
- Strong communication skills, with the ability to work cross-functionally with development teams and mentor junior team members to improve testing practices and tools
- Ability to analyze testing processes, recommend improvements, and ensure the testing environment aligns with industry best practices, contributing to the overall quality of software

Acceldata is an equal-opportunity employer

At Acceldata, we are committed to providing equal employment opportunities regardless of job history, disability, gender identity, religion, race, color, caste, marital/parental status, veteran status, or any other special status. We stand against the discrimination of employees and individuals and are proud to be an equitable workplace that welcomes individuals from all walks of life if they fit the designated roles and responsibilities.

Acceldata is all about working with some of the best minds in the industry and experiencing a culture that values an 'out-of-the-box' mindset. If you want to push boundaries, learn continuously, and grow to be the best version of yourself, Acceldata is the place to be!
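As a small illustration of the automated-test development this role emphasizes, here is a hypothetical ScalaTest example checking a data-quality rule. The recordIsValid function and its rules are invented for illustration, not Acceldata code.

```scala
import org.scalatest.funsuite.AnyFunSuite

// Hypothetical data-quality rule under test: a record is valid
// when its id is non-empty and its amount is non-negative.
object DataQuality {
  final case class Record(id: String, amount: Double)

  def recordIsValid(r: Record): Boolean =
    r.id.nonEmpty && r.amount >= 0.0
}

class DataQualitySuite extends AnyFunSuite {
  import DataQuality._

  test("a well-formed record passes validation") {
    assert(recordIsValid(Record("r-1", 10.5)))
  }

  test("an empty id fails validation") {
    assert(!recordIsValid(Record("", 10.5)))
  }

  test("a negative amount fails validation") {
    assert(!recordIsValid(Record("r-2", -1.0)))
  }
}
```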
Posted 3 days ago
5.0 years
0 Lacs
India
On-site
About Oportun

Oportun (Nasdaq: OPRT) is a mission-driven fintech that puts its 2.0 million members' financial goals within reach. With intelligent borrowing, savings, and budgeting capabilities, Oportun empowers members with the confidence to build a better financial future. Since inception, Oportun has provided more than $16.6 billion in responsible and affordable credit, saved its members more than $2.4 billion in interest and fees, and helped its members save an average of more than $1,800 annually. Oportun has been certified as a Community Development Financial Institution (CDFI) since 2009.

Working at Oportun

Working at Oportun means enjoying a differentiated experience of being part of a team that fosters a diverse, equitable and inclusive culture where we all feel a sense of belonging and are encouraged to share our perspectives. This inclusive culture is directly connected to our organization's performance and ability to fulfill our mission of delivering affordable credit to those left out of the financial mainstream. We celebrate and nurture our inclusive culture through our employee resource groups.

Position Overview

As a Sr. Data Engineer at Oportun, you will be a key member of our team, responsible for designing, developing, and maintaining sophisticated software and data platforms in achieving the charter of the engineering group. Your mastery of a technical domain enables you to take up business problems and solve them with a technical solution. With your depth of expertise and leadership abilities, you will actively contribute to architectural decisions, mentor junior engineers, and collaborate closely with cross-functional teams to deliver high-quality, scalable software solutions that advance our impact in the market. This is a role where you will have the opportunity to take up responsibility in leading the technology effort, from technical requirements gathering to final successful delivery of the product, for large initiatives (cross-functional and multi-month-long projects).

Responsibilities

Data Architecture and Design
- Lead the design and implementation of scalable, efficient, and robust data architectures to meet business needs and analytical requirements.
- Collaborate with stakeholders to understand data requirements, build subject matter expertise, and define optimal data models and structures.

Data Pipeline Development and Optimization
- Design and develop data pipelines, ETL processes, and data integration solutions for ingesting, processing, and transforming large volumes of structured and unstructured data.
- Optimize data pipelines for performance, reliability, and scalability.

Database Management and Optimization
- Oversee the management and maintenance of databases, data warehouses, and data lakes to ensure high performance, data integrity, and security.
- Implement and manage ETL processes for efficient data loading and retrieval.

Data Quality and Governance
- Establish and enforce data quality standards, validation rules, and data governance practices to ensure data accuracy, consistency, and compliance with regulations.
- Drive initiatives to improve data quality and documentation of data assets.

Mentorship and Leadership
- Provide technical leadership and mentorship to junior team members, assisting in their skill development and growth.
- Lead and participate in code reviews, ensuring best practices and high-quality code.
Collaboration and Stakeholder Management
- Collaborate with cross-functional teams, including data scientists, analysts, and business stakeholders, to understand their data needs and deliver solutions that meet those needs.
- Communicate effectively with non-technical stakeholders to translate technical concepts into actionable insights and business value.

Performance Monitoring and Optimization
- Implement monitoring systems and practices to track data pipeline performance, identify bottlenecks, and optimize for improved efficiency and scalability.

Common Requirements

- You have a strong understanding of a business or system domain, with sufficient knowledge and expertise around the appropriate metrics and trends.
- You collaborate closely with product managers, designers, and fellow engineers to understand business needs and translate them into effective solutions.
- You provide technical leadership and expertise, guiding the team in making sound architectural decisions and solving challenging technical problems. Your solutions anticipate scale, reliability, monitoring, integration, and extensibility.
- You conduct code reviews and provide constructive feedback to ensure code quality, performance, and maintainability.
- You mentor and coach junior engineers, fostering a culture of continuous learning, growth, and technical excellence within the team.
- You play a significant role in the ongoing evolution and refinement of the tools and applications used by the team, and drive adoption of new practices within your team.
- You take ownership of customer issues, including initial troubleshooting, identification of root cause, and issue escalation or resolution, while maintaining the overall reliability and performance of our systems.
- You set the benchmark for responsiveness, ownership, and overall accountability of engineering systems.
- You independently drive and lead multiple features, contribute to one or more large projects, and lead smaller projects. You can orchestrate work that spans multiple engineers within your team and keep all relevant stakeholders informed.
- You keep your lead/EM informed about your work and that of the team so they can share it with stakeholders, including escalation of issues.

Qualifications

- Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
- 5+ years of experience in data engineering, with a focus on data architecture, ETL, and database management.
- Proficiency in programming languages like Python/PySpark and Java or Scala.
- Expertise in big data technologies such as Hadoop, Spark, and Kafka.
- In-depth knowledge of SQL and experience with various database technologies (e.g., PostgreSQL, MariaDB, NoSQL databases).
- Experience and expertise in building complex end-to-end data pipelines.
- Experience with orchestration and designing job schedules using CI/CD and workflow tools like Jenkins, Airflow, or Databricks.
- Ability to work in an Agile environment (Scrum, Lean, Kanban, etc.).
- Ability to mentor junior team members.
- Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and their data services (e.g., AWS Redshift, S3, Azure SQL Data Warehouse).
- Strong leadership, problem-solving, and decision-making skills.
- Excellent communication and collaboration abilities.
- Familiarity or certification in Databricks is a plus.
We are proud to be an Equal Opportunity Employer and consider all qualified applicants for employment opportunities without regard to race, age, color, religion, gender, national origin, disability, sexual orientation, veteran status or any other category protected by the laws or regulations in the locations where we operate. California applicants can find a copy of Oportun's CCPA Notice here: https://oportun.com/privacy/california-privacy-notice/.

We will never request personal identifiable information (bank, credit card, etc.) before you are hired. We do not charge you for pre-employment fees such as background checks, training, or equipment. If you think you have been a victim of fraud by someone posing as us, please report your experience to the FBI's Internet Crime Complaint Center (IC3).
Posted 3 days ago
5.0 years
4 - 8 Lacs
Hyderābād
On-site
About Company: A cloud and data analytics company that empowers businesses to unlock insights and drive innovation through modern data solutions.

Role: Data Engineer
Experience: 5 - 9 Years
Location: Chennai & Hyderabad
Notice Period: Immediate Joiner - 60 Days

Roles and Responsibilities

- Bachelor's degree in Computer Science, Engineering, or a related field.
- 5+ years of experience in data engineering or a related role.
- Proficiency in programming languages such as Python, Java, or Scala, and scripting languages like SQL.
- Experience with big data technologies and ETL processes.
- Knowledge of cloud services (AWS, Azure, GCP) and their data-related services.
- Familiarity with data modeling, data warehousing, and building high-volume data pipelines.
- Understanding of distributed systems and microservices architecture.
- Experience with source control tools like Git, and CI/CD practices.
- Strong problem-solving skills and ability to work independently.
- Excellent communication and collaboration skills.

Mandatory skill set: Python, PySpark, SQL, Databricks, AWS
Posted 3 days ago
3.0 - 6.0 years
6 - 9 Lacs
Hyderābād
On-site
Senior Analyst – Data Engineer - Deloitte Technology - Deloitte Support Services India Private Limited

Do you thrive on developing creative and innovative insights to solve complex challenges? Want to work on next-generation, cutting-edge products and services that deliver outstanding value and that are global in vision and scope? Work with premier thought leaders in your field? Work for a world-class organization that provides an exceptional career experience with an inclusive and collaborative culture?

Work you'll do

Seeking a candidate with extensive experience in designing, delivering and maintaining implementations of solutions in the cloud, specifically Microsoft Azure. This candidate should also possess strong cross-discipline communication skills, strong analytical aptitude with critical thinking, a solid understanding of how data translates into reporting and dashboarding capabilities, and the tools and platforms that support them.

Responsibilities

Role Specific
- Design well-structured data models using methodologies such as Kimball or Inmon that accurately represent the business requirements, ensure data integrity, and minimize redundancies.
- Develop and implement data pipelines to extract, transform, and load (ETL) data from various sources into Azure data services, using Azure Data Factory, Azure Databricks, or other tools to orchestrate data workflows and data movement.
- Build, test, and run data assets tied to tasks and user stories from the Azure DevOps instance of Enterprise Data & Analytics.
- Bring a level of technical expertise in the Big Data space that contributes to the strategic roadmaps for Enterprise Data Architecture, Global Data Cloud Architecture, and Global Business Intelligence Architecture, and contributes to the development of the broader Enterprise Data & Analytics Engineering community.
- Actively participate in regularly scheduled contact calls to transparently review the status of in-flight projects and the priorities of backlog projects, and to review adoption of previous deliveries from Enterprise Data & Analytics with the Data Insights team.
- Handle break fixes and participate in a rotational on-call schedule. On-call includes monitoring of scheduled jobs and ETL pipelines.
- Actively participate in team meetings to transparently review the status of in-flight projects and their progress.
- Follow standard practice and frameworks on each project from development to testing and productionizing, each within the appropriate environment laid out by Data Architecture.
- Challenges self and others to make an impact that matters and helps the team connect their contributions with the broader purpose.
- Sets expectations for the team, aligns the work based on strengths and competencies, and challenges the team to raise the bar while providing support.
- Extensive knowledge of multiple technologies, tools, and processes to improve the design and architecture of the assigned applications.

Knowledge Sharing / Documentation
- Contribute to, produce, and maintain processes, procedures, and operational and architectural documentation.
- Change control: ensure compliance with processes and adherence to standards and documentation.
- Work with Deloitte Technology leadership and service teams in reviewing documentation and aligning KPIs to critical steps in our service operations.
- Active participation in ongoing training within the BI space.

The team

At Deloitte, we're all about collaboration.
And nowhere is this more apparent than among our 2,000-strong internal services team. With our combined specialist skills, we provide all the essential support and advice our client-facing colleagues need, right across the firm. This enables them to focus all of their efforts on delivering the best service possible to their clients. Covering seven distinct areas: Human Resources, Clients & Industries, Finance & Legal, Practice Support Services, Quality & Risk Services, IT Services, and Workplace Services & Real Estate, together we live, breathe and deliver the Deloitte experience.

Location: Hyderabad
Work shift timings: 11 AM to 8 PM

Qualifications

- Bachelor of Engineering / Bachelor of Technology
- 3-6 years of broad-based IT experience with technical knowledge of Microsoft SQL Server, Azure SQL Data Warehouse, Azure Data Lake Store, and Azure Data Factory
- Demonstrated experience with the Apache framework (Spark, Scala, etc.)
- Well versed in SQL and comfortable scripting in Python or a similar language

First Month Critical Outcomes

- Absorb strategic projects from the backlog and complete the related Azure SQL Data Warehouse development work.
- Inspect existing run-state SQL Server databases and Azure SQL Data Warehouses and identify optimizations for potential development.
- Deliver new databases assigned as needed.
- Integration into the on-call rotation (first 90 days).
- Contribute to legacy content and architecture migration to the data lake (first 90 days).
- Delivery of the first two data ingestion pipelines, including ingestion, QA and automation using Azure Big Data tools (first 90 days).
- Document all work following the standard documentation practices set forth by Data Governance (first 90 days).

How you'll grow

At Deloitte, we've invested a great deal to create a rich environment in which our professionals can grow. We want all our people to develop in their own way, playing to their own strengths as they hone their leadership skills. And, as a part of our efforts, we provide our professionals with a variety of learning and networking opportunities, including exposure to leaders, sponsors, coaches, and challenging assignments, to help accelerate their careers along the way. No two people learn in exactly the same way. So, we provide a range of resources including live classrooms, team-based learning, and eLearning. DU: The Leadership Center in India, our state-of-the-art, world-class learning center in the Hyderabad offices, is an extension of the Deloitte University (DU) in Westlake, Texas, and represents a tangible symbol of our commitment to our people's growth and development. Explore DU: The Leadership Center in India.

Benefits

At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.

Deloitte's culture

Our positive and supportive culture encourages our people to do their best work every day. We celebrate individuals by recognizing their uniqueness and offering them the flexibility to make daily choices that can help them to be healthy, centered, confident, and aware. We offer well-being programs and are continuously looking for new ways to maintain a culture that is inclusive, invites authenticity, leverages our diversity, and where our people excel and lead healthy, happy lives. Learn more about Life at Deloitte.

Corporate citizenship

Deloitte is led by a purpose: to make an impact that matters.
This purpose defines who we are and extends to relationships with our clients, our people and our communities. We believe that business has the power to inspire and transform. We focus on education, giving, skill-based volunteerism, and leadership to help drive positive social impact in our communities. Learn more about Deloitte's impact on the world.

#EAG-Technology

Recruiting tips

From developing a stand-out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Our people and culture

Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Our purpose

Deloitte's purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Professional development

From entry-level employees to senior leaders, we believe there's always room to learn. We offer opportunities to build new skills, take on leadership opportunities and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career.

Requisition code: 304653
Posted 3 days ago
8.0 years
8 - 9 Lacs
Gurgaon
On-site
You Lead the Way. We've Got Your Back.

With the right backing, people and businesses have the power to progress in incredible ways. When you join Team Amex, you become part of a global and diverse community of colleagues with an unwavering commitment to back our customers, communities, and each other. Here, you'll learn and grow as we help you create a career journey that's unique and meaningful to you with benefits, programs, and flexibility that support you personally and professionally.

At American Express, you'll be recognized for your contributions, leadership, and impact: every colleague has the opportunity to share in the company's success. Together, we'll win as a team, striving to uphold our company values and powerful backing promise to provide the world's best customer experience every day. And we'll do it with the utmost integrity, and in an environment where everyone is seen, heard and feels like they belong. Join Team Amex and let's lead the way together.

American Express has embarked on an exciting transformation driven by an energetic new team of high performers. This is a great opportunity to join the Customer Marketing organization within American Express Technologies and become a driver of this exciting journey. We are looking for a highly skilled and experienced Senior Engineer with a history of building Big Data, GCP Cloud, Python and Spark applications. The Senior Engineer will play a crucial role in designing, implementing, and optimizing data solutions to support our organization's data-driven initiatives. This role requires expertise in data engineering, strong problem-solving abilities, and a collaborative mindset to work effectively with various stakeholders.

Joining the Enterprise Marketing team, this role will be focused on the delivery of innovative solutions to satisfy the needs of our business. As an agile team we work closely with our business partners to understand what they require, and we strive to continuously improve as a team. We pride ourselves on a culture of kindness and positivity, and a continuous focus on supporting colleague development to help you achieve your career goals. We lead with integrity, and we emphasize work/life balance for all of our teammates.

How will you make an impact in this role?

There are hundreds of opportunities to make your mark on technology and life at American Express. Here's just some of what you'll be doing:

- Developing innovative, high-quality, and robust operational engineering capabilities as part of our team.
- Develop software in our technology stack, which is constantly evolving but currently includes Big Data, Spark, Python, Scala, GCP, and the Adobe suite (e.g., Customer Journey Analytics).
- Work with business partners and stakeholders to understand functional requirements, architecture dependencies, and business capability roadmaps.
- Create technical solution designs to meet business requirements.
- Define best practices to be followed by the team.
- Take your place as a core member of an Agile team driving the latest development practices.
- Identify and drive reengineering opportunities, and opportunities for adopting new technologies and methods.
- Suggest and recommend solution architecture to resolve business problems.
- Perform peer code reviews and participate in technical discussions with the team on the best possible solutions.

As part of our diverse tech team, you can architect, code and ship software that makes us an essential part of our customers' digital lives.
Here, you can work alongside talented engineers in an open, supportive, inclusive environment where your voice is valued, and you make your own decisions on what tech to use to solve challenging problems. American Express offers a range of opportunities to work with the latest technologies and encourages you to back the broader engineering community through open source. And because we understand the importance of keeping your skills fresh and relevant, we give you dedicated time to invest in your professional development. Find your place in the technology of #TeamAmex.

Minimum Qualifications

- BS or MS degree in computer science, computer engineering, or other technical discipline, or equivalent work experience.
- 8+ years of hands-on software development experience with Big Data & Analytics solutions: Hadoop, Hive, Spark, Scala, Python, shell scripting, and GCP Cloud (BigQuery, Bigtable, Airflow). (A minimal BigQuery-from-Spark sketch follows this posting.)
- Working knowledge of the Adobe suite, such as Adobe Experience Platform and Adobe Customer Journey Analytics.
- Proficiency in SQL and database systems, with experience in designing and optimizing data models for performance and scalability.
- Design and development experience with Kafka, real-time ETL pipelines, and APIs is desirable.
- Experience in designing, developing, and optimizing data pipelines for large-scale data processing, transformation, and analysis using Big Data and GCP technologies.
- Certification on a cloud platform (GCP Professional Data Engineer) is a plus.
- Understanding of distributed (multi-tiered) systems, data structures, algorithms, and design patterns.
- Strong object-oriented programming skills and design patterns.
- Experience with CI/CD pipelines, automated test frameworks, and source code management tools (XLR, Jenkins, Git, Maven).
- Good knowledge of and experience with configuration management tools like GitHub.
- Ability to analyze complex data engineering problems, propose effective solutions, and implement them effectively. Looks proactively beyond the obvious for continuous improvement opportunities.
- Communicates effectively with product and cross-functional teams.
- Willingness to learn new technologies and leverage them to their optimal potential.
- Understanding of various SDLC methodologies; familiarity with Agile and Scrum ceremonies.

We back you with benefits that support your holistic well-being so you can be and deliver your best. This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally:

- Competitive base salaries
- Bonus incentives
- Support for financial well-being and retirement
- Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location)
- Flexible working model with hybrid, onsite or virtual arrangements depending on role and business need
- Generous paid parental leave policies (depending on your location)
- Free access to global on-site wellness centers staffed with nurses and doctors (depending on location)
- Free and confidential counseling support through our Healthy Minds program
- Career development and training opportunities

American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.
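For context on the GCP BigQuery experience this posting asks for, here is a minimal, hypothetical Scala sketch reading a public BigQuery table from Spark. It assumes the open-source spark-bigquery-connector is on the classpath and that credentials resolve via application default credentials; it is an illustration, not American Express code.

```scala
import org.apache.spark.sql.SparkSession

object BigQueryReadDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("bigquery-read")
      .getOrCreate()

    // Read a public BigQuery sample table via the connector.
    val shakespeare = spark.read
      .format("bigquery")
      .load("bigquery-public-data.samples.shakespeare")

    // Aggregate word counts per corpus as a stand-in for real work.
    shakespeare
      .groupBy("corpus")
      .sum("word_count")
      .show(10)

    spark.stop()
  }
}
```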
Posted 3 days ago
8.0 years
30 - 38 Lacs
Gurgaon
Remote
Role: AWS Data Engineer Location: Gurugram Mode: Hybrid Type: Permanent Job Description: We are seeking a talented and motivated Data Engineer with requisite years of hands-on experience to join our growing data team. The ideal candidate will have experience working with large datasets, building data pipelines, and utilizing AWS public cloud services to support the design, development, and maintenance of scalable data architectures. This is an excellent opportunity for individuals who are passionate about data engineering and cloud technologies and want to make an impact in a dynamic and innovative environment. Key Responsibilities: Data Pipeline Development: Design, develop, and optimize end-to-end data pipelines for extracting, transforming, and loading (ETL) large volumes of data from diverse sources into data warehouses or lakes. Cloud Infrastructure Management: Implement and manage data processing and storage solutions in AWS (Amazon Web Services) using services like S3, Redshift, Lambda, Glue, Kinesis, and others. Data Modeling: Collaborate with data scientists, analysts, and business stakeholders to define data requirements and design optimal data models for reporting and analysis. Performance Tuning & Optimization: Identify bottlenecks and optimize query performance, pipeline processes, and cloud resources to ensure cost-effective and scalable data workflows. Automation & Scripting: Develop automated data workflows and scripts to improve operational efficiency using Python, SQL, or other scripting languages. Collaboration & Documentation: Work closely with data analysts, data scientists, and other engineering teams to ensure data availability, integrity, and quality. Document processes, architectures, and solutions clearly. Data Quality & Governance: Ensure the accuracy, consistency, and completeness of data. Implement and maintain data governance policies to ensure compliance and security standards are met. Troubleshooting & Support: Provide ongoing support for data pipelines and troubleshoot issues related to data integration, performance, and system reliability. Qualifications: Essential Skills: Experience: 8+ years of professional experience as a Data Engineer, with a strong background in building and optimizing data pipelines and working with large-scale datasets. AWS Experience: Hands-on experience with AWS cloud services, particularly S3, Lambda, Glue, Redshift, RDS, and EC2. ETL Processes: Strong understanding of ETL concepts, tools, and frameworks. Experience with data integration, cleansing, and transformation. Programming Languages: Proficiency in Python, SQL, and other scripting languages (e.g., Bash, Scala, Java). Data Warehousing: Experience with relational and non-relational databases, including data warehousing solutions like AWS Redshift, Snowflake, or similar platforms. Data Modeling: Experience in designing data models, schema design, and data architecture for analytical systems. Version Control & CI/CD: Familiarity with version control tools (e.g., Git) and CI/CD pipelines. Problem-Solving: Strong troubleshooting skills, with an ability to optimize performance and resolve technical issues across the data pipeline. Desirable Skills: Big Data Technologies: Experience with Hadoop, Spark, or other big data technologies. Containerization & Orchestration: Knowledge of Docker, Kubernetes, or similar containerization/orchestration technologies. Data Security: Experience implementing security best practices in the cloud and managing data privacy requirements. 
Data Streaming: Familiarity with data streaming technologies such as AWS Kinesis or Apache Kafka. Business Intelligence Tools: Experience with BI tools (Tableau, QuickSight) for visualization and reporting. Agile Methodology: Familiarity with Agile development practices and tools (Jira, Trello, etc.). Job Type: Permanent. Pay: ₹3,000,000.00 - ₹3,800,000.00 per year. Benefits: Work from home. Schedule: Day shift, Monday to Friday. Experience: AWS: 4 years (Required); Data Engineering: 6 years (Required); Python: 3 years (Required); PySpark/Spark: 3 years (Required). Work Location: In person
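To make the pipeline responsibilities above concrete, here is a minimal sketch of a batch ETL job in Scala with Spark: extract raw CSV from S3, apply light cleansing, and load partitioned Parquet that Redshift Spectrum or Athena can query. Bucket names and columns are placeholders, not details from the posting:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object OrdersEtl {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("orders-etl").getOrCreate()

    // Extract: raw CSV landed in S3 (paths are placeholders)
    val orders = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("s3a://example-raw-bucket/orders/")

    // Transform: basic cleansing and a derived partition column
    val cleaned = orders
      .filter(col("order_id").isNotNull)
      .dropDuplicates("order_id")
      .withColumn("order_date", to_date(col("order_ts")))

    // Load: partitioned Parquet, a layout external query engines can scan efficiently
    cleaned.write
      .mode("overwrite")
      .partitionBy("order_date")
      .parquet("s3a://example-curated-bucket/orders/")

    spark.stop()
  }
}
```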
Posted 3 days ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Description Are you passionate about data? Does the prospect of dealing with massive volumes of data excite you? Do you want to build data engineering solutions that process billions of records a day in a scalable fashion using AWS technologies? Do you want to create the next-generation tools for intuitive data access? If so, Amazon Finance Technology (FinTech) is for you! FinTech is seeking a Data Engineer to join the team that is shaping the future of the finance data platform. The team is committed to building the next generation big data platform that will be one of the world's largest finance data warehouses, supporting Amazon's rapidly growing and dynamic businesses, and to using it to deliver BI applications that have an immediate influence on day-to-day decision making. Amazon has a culture of data-driven decision-making, and demands data that is timely, accurate, and actionable. Our platform serves Amazon's finance, tax and accounting functions across the globe. As a Data Engineer, you should be an expert with data warehousing technical components (e.g. Data Modeling, ETL and Reporting), infrastructure (e.g. hardware and software) and their integration. You should have a deep understanding of the architecture for enterprise-level data warehouse solutions using multiple platforms (RDBMS, Columnar, Cloud). You should be an expert in the design, creation, management, and business use of large data sets. You should have excellent business and communication skills to be able to work with business owners to develop and define key business questions, and to build data sets that answer those questions. The candidate is expected to be able to build efficient, flexible, extensible, and scalable ETL and reporting solutions. You should be enthusiastic about learning new technologies and be able to implement solutions using them to provide new functionality to the users or to scale the existing platform. Excellent written and verbal communication skills are required as the person will work very closely with diverse teams. Having strong analytical skills is a plus. Above all, you should be passionate about working with huge data sets and someone who loves to bring data sets together to answer business questions and drive change. Our ideal candidate thrives in a fast-paced environment, relishes working with large transactional volumes and big data, enjoys the challenge of highly complex business contexts (that are typically being defined in real-time), and, above all, is passionate about data and analytics. In this role you will be part of a team of engineers creating the world's largest financial data warehouses and BI tools for Amazon's expanding global footprint. Key job responsibilities Design, implement, and support a platform providing secured access to large datasets. Interface with tax, finance and accounting customers, gathering requirements and delivering complete BI solutions. Model data and metadata to support ad-hoc and pre-built reporting. Own the design, development, and maintenance of ongoing metrics, reports, analyses, dashboards, etc. to drive key business decisions. Recognize and adopt best practices in reporting and analysis: data integrity, test design, analysis, validation, and documentation. Tune application and query performance using profiling tools and SQL. Analyze and solve problems at their root, stepping back to understand the broader context. Learn and understand a broad range of Amazon’s data resources and know when, how, and which to use and which not to use.
Keep up to date with advances in big data technologies and run pilots to design the data architecture to scale with the increased data volume using AWS. Continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for datasets. Triage many possible courses of action in a high-ambiguity environment, making use of both quantitative analysis and business judgment. Basic Qualifications - 3+ years of data engineering experience - Experience with data modeling, warehousing and building ETL pipelines - Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala) - Experience with one or more scripting languages (e.g., Python, KornShell) - Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions - Experience with data visualization software (e.g., AWS QuickSight or Tableau) or open-source projects - Bachelor's or Master's degree Preferred Qualifications 5+ years of data engineering experience Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases) Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - Amazon Dev Center India - Hyderabad Job ID: A2953275
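As an illustration of turning a business question into a reproducible dataset, the sketch below uses Spark SQL in Scala to roll up ledger entries by cost center and month. The paths, table, and columns are hypothetical, not part of the posting:

```scala
import org.apache.spark.sql.SparkSession

object MonthlyFinanceRollup {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("monthly-finance-rollup").getOrCreate()

    // Ledger entries curated earlier in the pipeline (path is a placeholder)
    spark.read.parquet("s3a://example-finance-lake/ledger/")
      .createOrReplaceTempView("ledger")

    // One business question expressed as one reproducible dataset
    val monthly = spark.sql("""
      SELECT cost_center,
             date_trunc('month', posted_at) AS month,
             SUM(amount)                    AS net_amount,
             COUNT(*)                       AS entry_count
      FROM ledger
      WHERE posted_at >= '2024-01-01'
      GROUP BY cost_center, date_trunc('month', posted_at)
    """)

    monthly.write.mode("overwrite").parquet("s3a://example-finance-lake/marts/monthly_rollup/")
    spark.stop()
  }
}
```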
Posted 3 days ago
5.0 - 8.0 years
6 - 7 Lacs
Chennai
On-site
Responsibilities: Key roles and responsibilities: The DevOps Engineer will be responsible for designing, implementing, and maintaining CI/CD pipelines using Tekton, Harness, Jenkins, and uDeploy to streamline software delivery processes. This role involves managing configuration automation with Ansible, overseeing RHEL/Linux environments to optimize performance and security, and conducting static code analysis using SonarQube. The role also requires knowledge of Apache Spark, Scala, Java, Java-Spark, Apache Kafka, and the Cloudera ecosystem, which will be applied to designing processes for the Olympus project from concept to execution. Collaborate with cross-functional teams including product managers, developers, and designers to create intuitive, user-centered solutions. Conduct tasks related to feasibility studies, time and cost estimates, IT planning, risk technology, applications development, and model development, and establish and implement new or revised applications systems and programs to meet specific business needs or user areas. Monitor and control all phases of the development process, including analysis, design, construction, testing, and implementation, and provide user and operational support on applications to business users. Utilize in-depth specialty knowledge of applications development to analyze complex problems/issues, evaluate business processes, system processes, and industry standards, and make evaluative judgements. Recommend and develop security measures in post-implementation analysis of business usage to ensure successful system design and functionality. Consult with users/clients and other technology groups on issues, recommend advanced programming solutions, and install and assist customer exposure systems. Ensure essential procedures are followed and help define operating standards and processes. Serve as advisor or coach to new or lower-level analysts. Has the ability to operate with a limited level of direct supervision. Can exercise independence of judgement and autonomy. Acts as SME to senior stakeholders and/or other team members. Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency. Qualifications: 5-8 years of relevant experience Experience in systems analysis and programming of software applications Experience in managing and implementing successful projects Working knowledge of consulting/project management techniques/methods Ability to work under pressure and manage deadlines or unexpected changes in expectations or requirements Education: Bachelor’s degree/University degree or equivalent experience Technical Skillset: Java/Spark/Scala/Kafka/Tekton, Harness, Jenkins, and uDeploy This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required. - Job Family Group: Technology - Job Family: Applications Development - Time Type: Full time - Most Relevant Skills Please see the requirements listed above. - Other Relevant Skills For complementary skills, please see above and/or contact the recruiter.
- Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
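For a sense of the Kafka side of the stack this role lists, here is a minimal Scala sketch using the standard kafka-clients producer API. The broker address, topic, and payload are illustrative assumptions only:

```scala
import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

object TradeEventProducer {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put("bootstrap.servers", "broker:9092") // placeholder broker address
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")
    props.put("acks", "all") // wait for full acknowledgement for durability

    val producer = new KafkaProducer[String, String](props)
    try {
      // Topic name and payload are invented for illustration
      val record = new ProducerRecord[String, String](
        "trade-events", "trade-42", """{"symbol":"INFY","qty":100}""")
      producer.send(record).get() // block for the ack; async callbacks are typical in production
    } finally {
      producer.close()
    }
  }
}
```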
Posted 3 days ago
0.0 - 2.0 years
6 - 9 Lacs
Chennai
On-site
CDM Smith is seeking an Artificial Intelligence/Machine Learning Engineer to join our Digital Engineering Solutions team. This individual will be part of the Data Technology group within the Digital Engineering Solutions team, helping to drive strategic Architecture, Engineering and Construction (AEC) initiatives using cutting-edge data technologies and analytics to deliver actionable business insights and robust solutions for AEC professionals and client outcomes. The Data Technology group will lead the firm in AEC-focused Business Intelligence and data services by providing architectural guidance, technological vision, and solution development. The Data Technology group will specifically utilize advanced analytics, data science, and AI/ML to give our business and our products a competitive advantage. It includes understanding and managing the data, how it interconnects, and architecting & engineering data for self-serve BI and BA opportunities. This position is for a person who has demonstrated excellence in AI/ML engineering capabilities, is experienced with data technology and processes, and enjoys framing a problem, shaping and creating solutions, and helping to lead and champion implementation. As a member of the Digital Engineering Solutions team, the Data Technology group will also engage in research and development and provide guidance and oversight to the AEC practices at CDM Smith, engaging in new product research, testing, and the incubation of data technology-related ideas that arise from around the company. Key Responsibilities: Contribute to advanced analytics and artificial intelligence (AI) and machine learning (ML) solution techniques that address complex business challenges, particularly within the AEC domain. Apply state-of-the-art algorithms and techniques such as deep learning, NLP, computer vision, and time-series analysis for domain-specific use cases. Analyze large datasets to identify patterns and trends. Participate in the testing and validation of AI model accuracy and reliability to ensure models perform in line with business requirements and expectations. Assist with AI/ML workflow optimization by implementing MLOps practices, including CI/CD pipelines, model retraining, and version control. Collaborate with Data Engineers, Data Scientists, and other stakeholders to design and implement end-to-end AI/ML solutions. Stay abreast of the latest developments and advancements, including new and emerging technologies, best practices, and new tools and software applications, and how they could impact CDM Smith. Assist with the development of documentation, standards, best practices, and workflows for data technology hardware/software in use across the business. Perform other duties as required. Skills and Abilities: Good understanding of the software development life cycle. Basic experience with building and deploying machine learning models using frameworks such as TensorFlow, PyTorch, or Scikit-learn. Basic experience with cloud-based AI/ML services, particularly in Microsoft Azure and Databricks. Basic experience with programming languages (e.g., R, Python, Scala). Knowledge of MLOps practices, including automated pipelines, model versioning, monitoring, and lifecycle management. Knowledge of data privacy, security, and ethical AI principles, ensuring compliance with relevant standards. Excellent problem-solving and critical thinking skills to identify and address technical challenges effectively.
Strong critical thinking skills to generate innovative solutions and improve business processes. Ability to effectively communicate complex technical concepts to both technical and non-technical audiences. Detail oriented with the ability to assist with executing highly complex or specialized projects. Minimum Qualifications Bachelor’s degree. 0 – 2 years of related experience. Equivalent additional related experience will be considered in lieu of a degree. Amount of Travel Required 0% Background Check and Drug Testing Information CDM Smith Inc. and its divisions and subsidiaries (hereafter collectively referred to as “CDM Smith”) reserves the right to require background checks including criminal, employment, education, licensure, etc. as well as credit and motor vehicle when applicable for certain positions. In addition, CDM Smith may conduct drug testing for designated positions. Background checks are conducted after an offer of employment has been made in the United States. The timing of when background checks will be conducted on candidates for positions outside the United States will vary based on country statutory law, but in no case will the background check precede an interview. CDM Smith will conduct interviews of qualified individuals prior to requesting a criminal background check, and no job application submitted prior to such interview shall inquire into an applicant's criminal history. If this position is subject to a background check for any convictions related to its responsibilities and requirements, employment will be contingent upon successful completion of a background investigation including criminal history. Criminal history will not automatically disqualify a candidate. In addition, during employment individuals may be required by CDM Smith or a CDM Smith client to successfully complete additional background checks, including motor vehicle record as well as drug testing. Agency Disclaimer All vendors must have a signed CDM Smith Placement Agreement from the CDM Smith Recruitment Center Manager to receive payment for your placement. Verbal or written commitments from any other member of the CDM Smith staff will not be considered binding terms. All unsolicited resumes sent to CDM Smith and any resume submitted to any employee outside of the CDM Smith Recruiting Center Team (RCT) will be considered property of CDM Smith. CDM Smith will not be held liable to pay a placement fee. Business Unit COR Group COR Assignment Category Fulltime-Regular Employment Type Regular
Posted 3 days ago
0.0 - 2.0 years
0 Lacs
Chennai
On-site
CDM Smith is seeking a Data Scientist to join our Digital Engineering Solutions team. This individual will be part of the Data Technology group within the Digital Engineering Solutions team, helping to drive strategic Architecture, Engineering and Construction (AEC) initiatives using cutting-edge data technologies and analytics to deliver actionable business insights and robust solutions for AEC professionals and client outcomes. The Data Technology group will lead the firm in AEC-focused Business Intelligence and data services by providing architectural guidance, technological vision, and solution development. The Data Technology group will specifically utilize advanced analytics, data science, and AI/ML to give our business and our products a competitive advantage. It includes understanding and managing the data, how it interconnects, and architecting & engineering data for self-serve BI and BA opportunities. This position is for a person who has demonstrated excellence in data science capabilities, is experienced with data technology and processes, and enjoys framing a problem, shaping and creating solutions, and helping to lead and champion implementation. As a member of the Digital Engineering Solutions team, the Data Technology group will also engage in research and development and provide guidance and oversight to the AEC practices at CDM Smith, engaging in new product research, testing, and the incubation of data technology-related ideas that arise from around the company. Key Responsibilities: Conduct rigorous exploratory data analysis (EDA) and apply statistical methods to uncover trends, relationships, and patterns and to identify data-driven insights. Assist in developing and deploying predictive, prescriptive, and advanced analytics models using statistical, optimization, and machine learning techniques to solve complex AEC business problems. Translate complex data science solutions into actionable business insights through effective communication and visualization. Collaborate with Data Engineers, AI/ML Engineers, and domain experts to frame problems and deliver actionable solutions. Ensure adherence to ethical standards in AI/ML and data science, maintaining transparency and accountability in all activities. Stay abreast of the latest developments and advancements, including new and emerging technologies, best practices, and new tools and software applications, and how they could impact CDM Smith. Assist with the development of documentation, standards, best practices, and workflows for data technology hardware/software in use across the business. Perform other duties as required. Skills and Abilities: Basic experience with statistical modeling, hypothesis testing, machine learning and advanced inferential and analytics techniques. Basic experience with programming languages (e.g., R, Python, Scala). SQL skills for querying and managing structured and unstructured data. Knowledge of data visualization tools and methods to present complex findings effectively to diverse audiences. Excellent problem-solving and critical thinking skills to identify and address technical challenges effectively. Strong critical thinking skills to generate innovative solutions and improve business processes. Ability to effectively communicate complex technical concepts to both technical and non-technical audiences. Detail oriented with the ability to assist with executing highly complex or specialized projects. Minimum Qualifications Bachelor’s degree. 0 – 2 years of related experience.
Equivalent additional directly related experience will be considered in lieu of a degree. Amount of Travel Required 0% Background Check and Drug Testing Information CDM Smith Inc. and its divisions and subsidiaries (hereafter collectively referred to as “CDM Smith”) reserves the right to require background checks including criminal, employment, education, licensure, etc. as well as credit and motor vehicle when applicable for certain positions. In addition, CDM Smith may conduct drug testing for designated positions. Background checks are conducted after an offer of employment has been made in the United States. The timing of when background checks will be conducted on candidates for positions outside the United States will vary based on country statutory law, but in no case will the background check precede an interview. CDM Smith will conduct interviews of qualified individuals prior to requesting a criminal background check, and no job application submitted prior to such interview shall inquire into an applicant's criminal history. If this position is subject to a background check for any convictions related to its responsibilities and requirements, employment will be contingent upon successful completion of a background investigation including criminal history. Criminal history will not automatically disqualify a candidate. In addition, during employment individuals may be required by CDM Smith or a CDM Smith client to successfully complete additional background checks, including motor vehicle record as well as drug testing. Agency Disclaimer All vendors must have a signed CDM Smith Placement Agreement from the CDM Smith Recruitment Center Manager to receive payment for your placement. Verbal or written commitments from any other member of the CDM Smith staff will not be considered binding terms. All unsolicited resumes sent to CDM Smith and any resume submitted to any employee outside of the CDM Smith Recruiting Center Team (RCT) will be considered property of CDM Smith. CDM Smith will not be held liable to pay a placement fee. Business Unit COR Group COR Assignment Category Fulltime-Regular Employment Type Regular
Posted 3 days ago
5.0 years
4 - 8 Lacs
Ahmedabad
On-site
Unlock Your Potential With IGNEK. Welcome to IGNEK, where we combine innovation and passion! We want our workplace to help you grow professionally and appreciate the special things each person brings. Come with us as we use advanced technology to make a positive difference. At IGNEK, we know our success comes from our team’s talent and hard work.

Culture & Values: Our culture and values guide our actions and define our principles. Growth: Learn and grow with us; we’re committed to providing opportunities for you to excel and expand your horizons. Transparency: We are very transparent in terms of work, culture, and communication to build trust and strong bonds among employees, teams, and managers. People First: Our success is all about our people; we care about your well-being and value diversity in our inclusive workplace. Be a Team: Teamwork is our strength. Embrace a “Be a Team” mindset, valuing collective success over individual triumphs. Together, we can overcome challenges and reach new heights.

Perks & Benefits: Competitive flexibility and comprehensive benefits prioritize your well-being. Creative programs, professional development, and a vibrant work-life balance ensure your success is our success: 5 days working, festival celebrations, rewards and benefits, certification programs, skills improvement, referral program, friendly work culture, training and development, enterprise projects, leave carry forward, yearly trips, hybrid work, fun activities (indoor and outdoor), flexible timing, reliable growth, team lunches, and work-life balance.

What Makes You Different? BE Authentic: stay true to yourself; it’s what sets you apart. BE Proactive: take charge of your work; don’t wait for things to happen. BE A Learner: keep an open mind and never stop seeking knowledge. BE Professional: approach every task with diligence and integrity. BE Innovative: think outside the box and push boundaries. BE Passionate: let your enthusiasm light the path to success.

Senior Data Engineer (AWS Expert). Technology: Data Engineer. Job Type: Full Time. Job Location: Ahmedabad. Experience: 5+ Years. Location: Ahmedabad (On-site). Shift Time: 2 PM – 11 PM IST.

About Us: IGNEK is a fast-growing custom software development company with over a decade of industry experience and a passionate team of 25+ experts. We specialize in crafting end-to-end digital solutions that empower businesses to scale efficiently and stay ahead in an ever-evolving digital world. At IGNEK, we believe in quality, innovation, and a people-first approach to solving real-world challenges through technology. We are looking for a highly skilled and experienced Data Engineer with deep expertise in AWS cloud technologies and strong hands-on experience in backend development, data pipelines, and system design. The ideal candidate will take ownership of delivering robust and scalable solutions while collaborating closely with cross-functional teams and the tech lead.

Key Responsibilities: Lead and manage the end-to-end implementation of cloud-native data solutions on AWS. Design, build, and maintain scalable data pipelines (PySpark/Spark) and data lake architectures (Delta Lake 3.0 or similar). Migrate on-premises systems to modern, scalable AWS-based services, delivering end-to-end solutions. Participate in code reviews, agile ceremonies, and documentation.
Engineer robust relational databases using Postgres or Oracle with a strong understanding of procedural languages. Collaborate with the tech lead to understand business requirements and deliver practical, scalable solutions. Integrate newly developed features following defined SDLC standards using CI/CD pipelines. Develop orchestration and automation workflows using tools like Apache Airflow. Ensure all solutions comply with security best practices, performance benchmarks, and cloud architecture standards. Monitor, debug, and troubleshoot issues across multiple environments. Stay current with new AWS features, services, and trends to drive continuous platform improvement. Required Skills & Qualifications: 5+ years of professional experience in data engineering and backend development. Strong expertise in Python, Scala, and PySpark. Deep knowledge of AWS services: EC2, S3, Lambda, RDS, Kinesis, IAM, API Gateway, and others. Hands-on experience with Postgres or Oracle, and building relational data stores. Experience with Spark clusters, Delta Lake, Glue Catalog, and large-scale data processing. Proven track record of end-to-end project delivery and third-party system integrations. Solid understanding of microservices, serverless architectures, and distributed computing. Skilled in Java, Bash scripting, and search tools like Elasticsearch. Proficient in using CI/CD tools (e.g., GitLab, GitHub, AWS CodePipeline). Experience working with Infrastructure as Code (IaC) using Terraform. Hands-on experience with Docker, containerization, and cloud-native deployments. Preferred Qualifications: AWS Certifications (e.g., AWS Certified Solutions Architect or similar). Exposure to Agile/Scrum project methodologies. Familiarity with Kubernetes, advanced networking, and cloud security practices. Experience managing or collaborating with onshore/offshore teams. Soft Skills: Excellent communication and stakeholder management. Strong leadership and problem-solving abilities. Team player with a collaborative mindset. High ownership and accountability in delivering quality outcomes.
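As one possible shape for the Delta Lake work described above, the sketch below shows an incremental upsert in Scala using the Delta Lake MERGE API. It assumes a Spark build with the delta-spark library on the classpath and an existing Delta table; all paths and column names are placeholders:

```scala
import io.delta.tables.DeltaTable
import org.apache.spark.sql.SparkSession

object CustomerUpsert {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("customer-upsert")
      // Delta Lake needs these two settings on a plain Spark build
      .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
      .config("spark.sql.catalog.spark_catalog", "org.apache.spark.sql.delta.catalog.DeltaCatalog")
      .getOrCreate()

    // An incremental extract from Postgres/Oracle would land here; path is a placeholder
    val updates = spark.read.parquet("s3a://example-staging/customers_delta/")

    // MERGE the new records into the curated Delta table
    DeltaTable.forPath(spark, "s3a://example-lake/customers")
      .as("target")
      .merge(updates.as("source"), "target.customer_id = source.customer_id")
      .whenMatched().updateAll()
      .whenNotMatched().insertAll()
      .execute()

    spark.stop()
  }
}
```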
Posted 3 days ago
3.0 - 5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
As a Data Engineer, you are required to: Design, build, and maintain data pipelines that efficiently process and transport data from various sources to storage systems or processing environments while ensuring data integrity, consistency, and accuracy across the entire data pipeline. Integrate data from different systems, often involving data cleaning, transformation (ETL), and validation. Design the structure of databases and data storage systems, including the design of schemas, tables, and relationships between datasets to enable efficient querying. Work closely with data scientists, analysts, and other stakeholders to understand their data needs and ensure that the data is structured in a way that makes it accessible and usable. Stay up-to-date with the latest trends and technologies in the data engineering space, such as new data storage solutions, processing frameworks, and cloud technologies. Evaluate and implement new tools to improve data engineering processes. Qualification: Bachelor's or Master's in Computer Science & Engineering, or equivalent. A professional degree in Data Science or Engineering is desirable. Experience level: At least 3-5 years of hands-on experience in Data Engineering and ETL. Desired Knowledge & Experience: Spark: Spark 3.x, RDD/DataFrames/SQL, Batch/Structured Streaming. Knowledge of Spark internals: Catalyst/Tungsten/Photon. Databricks: Workflows, SQL Warehouses/Endpoints, DLT, Pipelines, Unity, Autoloader. IDE: IntelliJ/PyCharm, Git, Azure DevOps, GitHub Copilot. Test: pytest, Great Expectations. CI/CD: YAML Azure Pipelines, Continuous Delivery, Acceptance Testing. Big Data Design: Lakehouse/Medallion Architecture, Parquet/Delta, Partitioning, Distribution, Data Skew, Compaction. Languages: Python/Functional Programming (FP). SQL: TSQL/Spark SQL/HiveQL. Storage: Data Lake and Big Data Storage Design. Additionally, it is helpful to know the basics of: Data Pipelines: ADF/Synapse Pipelines/Oozie/Airflow. Languages: Scala, Java. NoSQL: Cosmos, Mongo, Cassandra. Cubes: SSAS (ROLAP, HOLAP, MOLAP), AAS, Tabular Model. SQL Server: TSQL, Stored Procedures. Hadoop: HDInsight/MapReduce/HDFS/YARN/Oozie/Hive/HBase/Ambari/Ranger/Atlas/Kafka. Data Catalog: Azure Purview, Apache Atlas, Informatica. Required Soft Skills & Other Capabilities: Great attention to detail and good analytical abilities. Good planning and organizational skills. Collaborative approach to sharing ideas and finding solutions. Ability to work independently as well as in a global team environment.
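Since this posting calls out data skew explicitly, here is a hedged Scala sketch of one classic mitigation, salting the skewed join key. Paths and columns are invented for illustration; on Spark 3.x, Adaptive Query Execution can often handle skewed joins automatically, so salting is a fallback rather than a default:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object SaltedJoin {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("salted-join").getOrCreate()

    val salts = 16 // number of salt buckets; tune to the observed skew

    val facts = spark.read.parquet("s3a://example-lake/events/") // large, skewed on user_id
    val dims  = spark.read.parquet("s3a://example-lake/users/")  // smaller dimension table

    // Add a random salt to the skewed side so hot keys spread over many partitions...
    val saltedFacts = facts.withColumn("salt", (rand() * salts).cast("int"))

    // ...and replicate the dimension across all salt values so every bucket can match
    val saltedDims = dims.withColumn("salt", explode(array((0 until salts).map(lit): _*)))

    val joined = saltedFacts.join(saltedDims, Seq("user_id", "salt")).drop("salt")
    joined.write.mode("overwrite").parquet("s3a://example-lake/joined/")
    spark.stop()
  }
}
```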
Posted 3 days ago
4.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Work with business stakeholders and cross-functional SMEs to deeply understand business context and key business questions. Create proofs of concept (POCs) / minimum viable products (MVPs), then guide them through to production deployment and operationalization. Influence machine learning strategy for Digital programs and projects. Make solution recommendations that appropriately balance speed to market and analytical soundness. Explore design options to assess efficiency and impact, and develop approaches to improve robustness and rigor. Develop analytical/modelling solutions using a variety of commercial and open-source tools (e.g., Python, R, TensorFlow). Formulate model-based solutions by combining machine learning algorithms with other techniques such as simulations. Design, adapt, and visualize solutions based on evolving requirements and communicate them through presentations, scenarios, and stories. Create algorithms to extract information from large, multiparametric data sets. Deploy algorithms to production to identify actionable insights from large databases. Compare results from various methodologies and recommend optimal techniques. Develop and embed automated processes for predictive model validation, deployment, and implementation. Work on multiple pillars of AI including cognitive engineering, conversational bots, and data science. Ensure that solutions exhibit high levels of performance, security, scalability, maintainability, repeatability, appropriate reusability, and reliability upon deployment. Lead discussions at peer review and use interpersonal skills to positively influence decision making. Provide thought leadership and subject matter expertise in machine learning techniques, tools, and concepts; make impactful contributions to internal discussions on emerging practices. Facilitate cross-geography sharing of new ideas, learnings, and best practices. Requirements Bachelor of Science or Bachelor of Engineering at a minimum. 4+ years of work experience as a Data Scientist. A combination of business focus, strong analytical and problem-solving skills, and programming knowledge to be able to quickly cycle hypotheses through the discovery phase of a project. Advanced skills with statistical/programming software (e.g., R, Python) and data querying languages (e.g., SQL, Hadoop/Hive, Scala). Good hands-on skills in both feature engineering and hyperparameter optimization. Experience producing high-quality code, tests, and documentation. Experience with Microsoft Azure or AWS data management tools such as Azure Data Factory, Data Lake, Azure ML, Synapse, and Databricks. Understanding of descriptive and exploratory statistics, predictive modelling, evaluation metrics, decision trees, machine learning algorithms, optimization and forecasting techniques, and/or deep learning methodologies. Proficiency in statistical concepts and ML algorithms. Good knowledge of Agile principles and process. Ability to lead, manage, build, and deliver customer business results through data scientists or a professional services team. Ability to share ideas in a compelling manner, and to clearly summarize and communicate data analysis assumptions and results. Self-motivated and a proactive problem solver who can work independently and in teams.
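To ground the hyperparameter-optimization requirement, here is a minimal Scala sketch using Spark ML's CrossValidator to grid-search a logistic regression. The feature columns, label, and paths are assumptions for illustration, not details from the posting:

```scala
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.ml.tuning.{CrossValidator, ParamGridBuilder}
import org.apache.spark.sql.SparkSession

object ChurnModel {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("churn-model").getOrCreate()

    // Labeled training data; column names are illustrative
    val df = spark.read.parquet("s3a://example-lake/churn_training/")

    val assembler = new VectorAssembler()
      .setInputCols(Array("tenure", "monthly_spend", "support_tickets"))
      .setOutputCol("features")

    val lr = new LogisticRegression().setLabelCol("churned").setFeaturesCol("features")
    val pipeline = new Pipeline().setStages(Array(assembler, lr))

    // Grid search over regularization strength and elastic-net mixing
    val grid = new ParamGridBuilder()
      .addGrid(lr.regParam, Array(0.01, 0.1))
      .addGrid(lr.elasticNetParam, Array(0.0, 0.5))
      .build()

    val cv = new CrossValidator()
      .setEstimator(pipeline)
      .setEvaluator(new BinaryClassificationEvaluator().setLabelCol("churned"))
      .setEstimatorParamMaps(grid)
      .setNumFolds(3)

    val model = cv.fit(df)
    model.write.overwrite().save("s3a://example-lake/models/churn")
    spark.stop()
  }
}
```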
Posted 3 days ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Title: Senior Software Engineer. Department: IDP. About Us: HG Insights is the global leader in technology intelligence, delivering actionable AI-driven insights through advanced data science and scalable big data solutions. Our Big Data Insights Platform processes billions of unstructured documents and powers a vast data lake, enabling enterprises to make strategic, data-driven decisions. Join our team to solve complex data challenges at scale and shape the future of B2B intelligence. What You’ll Do: Design, build, and optimize large-scale distributed data pipelines for processing billions of unstructured documents using Databricks, Apache Spark, and cloud-native big data tools. Architect and scale enterprise-grade big data systems, including data lakes, ETL/ELT workflows, and syndication platforms for customer-facing Insights-as-a-Service (InaaS) products. Collaborate with product teams to develop features across databases, backend services, and frontend UIs that expose actionable intelligence from complex datasets. Implement cutting-edge solutions for data ingestion, transformation, and analytics using Hadoop/Spark ecosystems, Elasticsearch, and cloud services (AWS EC2, S3, EMR). Drive system reliability through automation, CI/CD pipelines (Docker, Kubernetes, Terraform), and infrastructure-as-code practices. What You’ll Be Responsible For: Leading the development of our Big Data Insights Platform, ensuring scalability, performance, and cost-efficiency across distributed systems. Mentoring engineers, conducting code reviews, and establishing best practices for Spark optimization, data modeling, and cluster resource management. Building and troubleshooting complex data pipelines, including performance tuning of Spark jobs, query optimization, and data quality enforcement. Collaborating in agile workflows (daily stand-ups, sprint planning) to deliver features rapidly while maintaining system stability. Ensuring security and compliance across data workflows, including access controls, encryption, and governance policies. What You’ll Need: BS/MS/Ph.D. in Computer Science or a related field, with 5+ years of experience building production-grade big data systems. Expertise in Scala/Java for Spark development, including optimization of batch/streaming jobs and debugging distributed workflows. Proven track record with Databricks, Hadoop/Spark ecosystems, and SQL/NoSQL databases (MySQL, Elasticsearch); cloud platforms (AWS EC2, S3, EMR) and infrastructure-as-code tools (Terraform, Kubernetes); RESTful APIs, microservices architectures, and CI/CD automation. Leadership experience as a technical lead, including mentoring engineers and driving architectural decisions. Strong understanding of agile practices, distributed computing principles, and data lake architectures. Airflow orchestration (DAGs, operators, sensors) and integration with Spark/Databricks. 7+ years of designing, modeling, and building big data pipelines in an enterprise work setting. Nice-to-Haves: Experience with machine learning pipelines (Spark MLlib, Databricks ML) for predictive analytics. Knowledge of data governance frameworks and compliance standards (GDPR, CCPA). Contributions to open-source big data projects or published technical blogs/papers. DevOps proficiency in monitoring tools (Prometheus, Grafana) and serverless architectures.
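As an example of the Spark optimization work this role involves, the sketch below shows a broadcast join in Scala, a common fix when a very large table is joined to a small lookup table. Table paths and the join key are illustrative assumptions:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.broadcast

object EnrichDocuments {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("enrich-documents").getOrCreate()

    val docs    = spark.read.parquet("s3a://example-lake/documents/") // billions of rows
    val vendors = spark.read.parquet("s3a://example-lake/vendors/")   // small lookup table

    // Broadcasting the small side ships it to every executor once,
    // avoiding a full shuffle of the large table
    val enriched = docs.join(broadcast(vendors), Seq("vendor_id"))

    enriched.write.mode("overwrite").parquet("s3a://example-lake/documents_enriched/")
    spark.stop()
  }
}
```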
Posted 3 days ago
0 years
0 Lacs
Kochi, Kerala, India
On-site
Introduction In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology. Your Role And Responsibilities As an Associate Software Developer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets. In This Role, Your Responsibilities May Include Implementing and validating predictive models as well as creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques Designing and implementing various enterprise search applications such as Elasticsearch and Splunk for client requirements Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours Building or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modelling results Preferred Education Master's Degree Required Technical And Professional Expertise Experience in Big Data technology like Hadoop, Apache Spark, Hive. Practical experience in Core Java (1.8 preferred)/Python/Scala. Experience in AWS cloud services, including S3, Redshift, EMR, etc. Strong expertise in RDBMS and SQL. Good experience in Linux and shell scripting. Experience building data pipelines using Apache Airflow. Preferred Technical And Professional Experience You thrive on teamwork and have excellent verbal and written communication skills. Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions Ability to communicate results to technical and non-technical audiences
Posted 3 days ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Work with business stakeholders and cross-functional SMEs to deeply understand business context and key business questions. Create proofs of concept (POCs) / minimum viable products (MVPs), then guide them through to production deployment and operationalization. Influence machine learning strategy for Digital programs and projects. Make solution recommendations that appropriately balance speed to market and analytical soundness. Explore design options to assess efficiency and impact, and develop approaches to improve robustness and rigor. Develop analytical/modelling solutions using a variety of commercial and open-source tools (e.g., Python, R, TensorFlow). Formulate model-based solutions by combining machine learning algorithms with other techniques such as simulations. Design, adapt, and visualize solutions based on evolving requirements and communicate them through presentations, scenarios, and stories. Create algorithms to extract information from large, multiparametric data sets. Deploy algorithms to production to identify actionable insights from large databases. Compare results from various methodologies and recommend optimal techniques. Develop and embed automated processes for predictive model validation, deployment, and implementation. Work on multiple pillars of AI including cognitive engineering, conversational bots, and data science. Ensure that solutions exhibit high levels of performance, security, scalability, maintainability, repeatability, appropriate reusability, and reliability upon deployment. Lead discussions at peer review and use interpersonal skills to positively influence decision making. Provide thought leadership and subject matter expertise in machine learning techniques, tools, and concepts; make impactful contributions to internal discussions on emerging practices. Facilitate cross-geography sharing of new ideas, learnings, and best practices. Requirements Bachelor of Science or Bachelor of Engineering at a minimum. 4+ years of work experience as a Data Scientist. A combination of business focus, strong analytical and problem-solving skills, and programming knowledge to be able to quickly cycle hypotheses through the discovery phase of a project. Advanced skills with statistical/programming software (e.g., R, Python) and data querying languages (e.g., SQL, Hadoop/Hive, Scala). Good hands-on skills in both feature engineering and hyperparameter optimization. Experience producing high-quality code, tests, and documentation. Experience with Microsoft Azure or AWS data management tools such as Azure Data Factory, Data Lake, Azure ML, Synapse, and Databricks. Understanding of descriptive and exploratory statistics, predictive modelling, evaluation metrics, decision trees, machine learning algorithms, optimization and forecasting techniques, and/or deep learning methodologies. Proficiency in statistical concepts and ML algorithms. Good knowledge of Agile principles and process. Ability to lead, manage, build, and deliver customer business results through data scientists or a professional services team. Ability to share ideas in a compelling manner, and to clearly summarize and communicate data analysis assumptions and results. Self-motivated and a proactive problem solver who can work independently and in teams.
Posted 3 days ago
4.0 years
0 Lacs
India
Remote
We’re looking for a skilled Big Data Engineer. Role Highlights: Position: Big Data Engineer. Experience: 4+ years. Location: Remote. Work mode: WFH. Notice Period: Immediate or 15-day joiners (mandatory). Key Skills: Big Data, AWS, CI/CD pipelines, Scala, and Python, Java, or C++.
Posted 3 days ago
4.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Role: Java Developer - Software Engineer. Experience: 4-9 years. Location: Chennai (hybrid). Interview: face-to-face. Mandatory: Java Spring Boot microservices, React JS, AWS Cloud, DevOps; Node is an added advantage. Job Description: Overall 4+ years of experience in Java development projects. 3+ years of development experience with React. 2+ years of experience in AWS Cloud and DevOps. Microservices development using Spring Boot. Technical stack: Core Java, Java, J2EE, Spring, MongoDB, GKE, Terraform, GitHub, GCP Developer, Kubernetes, Scala, Kafka. Technical tools: Confluence/Jira/Bitbucket or Git, CI/CD (Maven, Git, Jenkins), Eclipse or IntelliJ IDEA. Experience in event-driven architectures (CQRS and SAGA patterns). Experience in design patterns. Build tools (Gulp, Webpack), Jenkins, Docker, automation, Bash, Redis, Elasticsearch, Kibana. Technical stack (UI): JavaScript, React JS, CSS/SCSS, HTML5, Git.
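Since this posting mentions event-driven architectures with CQRS, here is a minimal, framework-free Scala sketch of the pattern: commands are validated on the write side and emit events, and the read side folds events into a query-optimized view. All type and field names are invented for illustration; in practice the events would flow through a broker such as Kafka:

```scala
// Commands express intent; events record facts after validation
sealed trait Command
final case class PlaceOrder(orderId: String, amount: BigDecimal) extends Command

sealed trait Event
final case class OrderPlaced(orderId: String, amount: BigDecimal) extends Event

object OrderHandler {
  // Write side: validate a command and emit events (published to a broker in practice)
  def handle(cmd: Command): Either[String, List[Event]] = cmd match {
    case PlaceOrder(id, amount) if amount > 0 =>
      Right(List(OrderPlaced(id, amount)))
    case PlaceOrder(id, _) =>
      Left(s"Order $id rejected: amount must be positive")
  }

  // Read side: fold events into a view optimized for queries
  def project(events: List[Event]): Map[String, BigDecimal] =
    events.foldLeft(Map.empty[String, BigDecimal]) {
      case (view, OrderPlaced(id, amount)) => view + (id -> amount)
    }
}
```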
Posted 3 days ago
3.0 years
0 Lacs
Greater Chennai Area
On-site
Join us in bringing joy to customer experience. Five9 is a leading provider of cloud contact center software, bringing the power of cloud innovation to customers worldwide. Living our values every day results in our team-first culture and enables us to innovate, grow, and thrive while enjoying the journey together. We celebrate diversity and foster an inclusive environment, empowering our employees to be their authentic selves. The Data Engineer will help design and implement a Google Cloud Platform (GCP) Data Lake, build scalable data pipelines, and ensure seamless access to data for business intelligence and data science tools. They will support a wide range of projects while collaborating closely with management teams and business leaders. The ideal candidate will have a strong understanding of data engineering principles, data warehousing concepts, and the ability to document technical knowledge into clear processes and procedures. This position is based out of one of the offices of our affiliate Acqueon Technologies in India, and will adopt the hybrid work arrangements of that location. You will be a member of the Acqueon team with responsibilities supporting Five9 products, collaborating with global teammates based primarily in the United States. Responsibilities Design, implement, and maintain a scalable Data Lake on GCP to centralize structured and unstructured data from various sources (databases, APIs, cloud storage). Utilize GCP services including BigQuery, Dataflow, Pub/Sub, and Cloud Storage to optimize and manage data workflows, ensuring scalability, performance, and security. Collaborate closely with data analytics and data science teams to understand data needs, ensuring data is properly prepared for consumption by various systems (e.g. DOMO, Looker, Databricks). Implement best practices for data quality, consistency, and governance across all data pipelines and systems, ensuring compliance with internal and external standards. Continuously monitor, test, and optimize data workflows to improve performance, cost efficiency, and reliability. Maintain comprehensive technical documentation of data pipelines, systems, and architecture for knowledge sharing and future development. Requirements Bachelor's degree in Computer Science, Data Engineering, Data Science, or a related quantitative field (e.g. Mathematics, Statistics, Engineering). 3+ years of experience using GCP Data Lake and Storage Services. Certifications in GCP are preferred (e.g. Professional Cloud Developer, Professional Cloud Database Engineer). Advanced proficiency with SQL, with experience in writing complex queries, optimizing for performance, and using SQL in large-scale data processing workflows. Proficiency in programming languages such as Python, Java, or Scala, with practical experience building data pipelines, automating data workflows, and integrating APIs for data ingestion. Five9 embraces diversity and is committed to building a team that represents a variety of backgrounds, perspectives, and skills. The more inclusive we are, the better we are. Five9 is an equal opportunity employer. View our privacy policy, including our privacy notice to California residents here: https://www.five9.com/pt-pt/legal. Note: Five9 will never request that an applicant send money as a prerequisite for commencing employment with Five9.
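For a concrete picture of GCP Data Lake work like this, here is a minimal Scala sketch that loads JSON from Cloud Storage into BigQuery using the spark-bigquery connector (assumed to be on the classpath). Bucket, dataset, and table names are placeholders:

```scala
import org.apache.spark.sql.SparkSession

object ContactCenterLoad {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("contact-center-load").getOrCreate()

    // Read interaction records landed in Cloud Storage (bucket is a placeholder)
    val interactions = spark.read.json("gs://example-landing/interactions/")

    // Write to BigQuery via the spark-bigquery connector; the temporary bucket
    // stages data for the indirect write path
    interactions.write
      .format("bigquery")
      .option("table", "example_dataset.interactions")
      .option("temporaryGcsBucket", "example-temp-bucket")
      .mode("append")
      .save()

    spark.stop()
  }
}
```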
Posted 3 days ago
Scala is a popular programming language that is widely used in India, especially in the tech industry. Job seekers looking for opportunities in Scala can find a variety of roles across different cities in the country. In this article, we will dive into the Scala job market in India and provide valuable insights for job seekers.
India's major tech hubs are known for their thriving tech ecosystems and have a high demand for Scala professionals.
The salary range for Scala professionals in India varies based on experience levels. Entry-level Scala developers can expect to earn around INR 6-8 lakhs per annum, while experienced professionals with 5+ years of experience can earn upwards of INR 15 lakhs per annum.
In the Scala job market, a typical career path may look like:
- Junior Developer
- Scala Developer
- Senior Developer
- Tech Lead
As professionals gain more experience and expertise in Scala, they can progress to higher roles with increased responsibilities.
In addition to Scala expertise, employers often look for candidates with the following skills:
- Java
- Spark
- Akka
- Play Framework
- Functional programming concepts
Having a good understanding of these related skills can enhance a candidate's profile and increase their chances of landing a Scala job.
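For example, a short Scala snippet can demonstrate several of the functional programming concepts interviewers tend to probe, such as immutability, higher-order functions, folds, and pattern matching. The data here is invented purely for illustration:

```scala
object FpBasics extends App {
  case class Job(title: String, salaryLakhs: Double)

  val jobs = List(
    Job("Junior Developer", 7.0),
    Job("Scala Developer", 12.0),
    Job("Senior Developer", 18.0)
  )

  // Higher-order functions: transform and filter without mutation
  val seniorTitles = jobs.filter(_.salaryLakhs > 10).map(_.title)

  // Folds reduce a collection to a single value
  val totalPayroll = jobs.foldLeft(0.0)((acc, job) => acc + job.salaryLakhs)

  // Pattern matching on immutable data
  jobs.headOption match {
    case Some(job) => println(s"First listing: ${job.title}")
    case None      => println("No jobs found")
  }

  println(s"Senior titles: $seniorTitles, total payroll: $totalPayroll lakhs")
}
```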
Here are 25 interview questions that you may encounter when applying for Scala roles:
As you explore Scala jobs in India, remember to showcase your expertise in Scala and related skills during interviews. Prepare well, stay confident, and you'll be on your way to a successful career in Scala. Good luck!