4.0 - 8.0 years
0 Lacs
Haryana
On-site
You will be working at Paras Twin Tower, Gurgaon as a full-time employee of Falcon, a Series-A funded, cloud-native, AI-first banking technology and processing platform. Falcon helps banks, NBFCs, and PPIs efficiently launch cutting-edge financial products such as credit cards, credit lines on UPI, prepaid cards, fixed deposits, and loans. Since its inception in 2022, Falcon has processed over USD 1 billion in transactions, collaborated with 12 of India's top financial institutions, and generated revenue exceeding USD 15 million. The company is backed by prominent investors from Japan, the USA, and leading Indian ventures and banks. To learn more about Falcon, visit https://falconfs.com/.

As an Intermediate Data Engineer with 5-7 years of experience, your responsibilities will include:
- Designing, developing, and maintaining scalable ETL processes using open source tools and data frameworks such as AWS Glue, AWS Athena, Redshift, Apache Kafka, Apache Spark, Apache Airflow, and Pentaho Data Integration (PDI)
- Designing, creating, and managing data lakes and data warehouses on the AWS cloud
- Optimizing data pipeline architecture and formulating complex SQL queries for big data processing
- Collaborating with product and engineering teams to build a platform for data modeling and machine learning operations
- Implementing data structures and algorithms to meet functional and non-functional requirements
- Ensuring data privacy and compliance
- Developing processes for monitoring and alerting on data quality issues
- Staying current with data engineering trends by evaluating new open source technologies

To qualify for this role, you must have:
- A Bachelor's or Master's degree in Computer Science or an MCA from a reputable institute
- At least 4 years of experience in a data engineering role
- Proficiency in Python, Java, or Scala for data processing (Python preferred)
- A deep understanding of SQL and analytical data warehouses
- Experience with database frameworks such as PostgreSQL, MySQL, and MongoDB
- Knowledge of AWS technologies such as Lambda, Athena, Glue, and Redshift
- Experience implementing ETL or ELT best practices at scale
- Familiarity with data pipeline tools such as Airflow, Luigi, Azkaban, and dbt
- Proficiency with Git for version control
- Familiarity with Linux-based systems and cloud services (preferably AWS)
- Strong analytical skills and the ability to work in an agile, collaborative team environment

Preferred skills include certification in any open source big data technology, expertise in Apache Hadoop, Apache Hive, and other open source big data technologies, familiarity with data visualization tools such as Apache Superset, Grafana, or Tableau, experience with CI/CD processes, and knowledge of containerization technologies such as Docker or Kubernetes. If you have these skills and this experience, we encourage you to explore the opportunity further. This description outlines the key responsibilities and qualifications for the Intermediate Data Engineer role.
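To make the pipeline-orchestration side of this role more concrete, here is a minimal sketch of a daily ETL job expressed as an Apache Airflow DAG, one of the frameworks named above. The DAG id, task logic, and sample records are hypothetical placeholders rather than Falcon's actual pipelines, and the sketch assumes Airflow 2.x.

```python
# Minimal sketch of a daily extract-transform-load DAG (Airflow 2.x assumed).
# All names and data below are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Placeholder for pulling raw records from a source system (e.g. an S3 landing zone).
    return [{"txn_id": 1, "amount": 250.0}, {"txn_id": 2, "amount": -99.5}]


def transform(**context):
    # Read the extract task's output from XCom and drop invalid rows.
    rows = context["ti"].xcom_pull(task_ids="extract")
    return [r for r in rows if r["amount"] > 0]


def load(**context):
    # Placeholder for writing cleaned rows to the warehouse (e.g. Redshift via COPY).
    rows = context["ti"].xcom_pull(task_ids="transform")
    print(f"would load {len(rows)} rows")


with DAG(
    dag_id="daily_transactions_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> transform_task >> load_task
```

In a production pipeline the Python callables would typically be replaced by operators for Glue jobs, Athena queries, or Redshift loads, but the task-dependency pattern stays the same.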
Posted 1 day ago
2.0 - 6.0 years
0 Lacs
Haryana
On-site
You are being offered a unique opportunity to join the Mastercard Economics Institute (MEI) as an Associate Economics Analyst on the Global Growth and Operations team. In this role, you will report to the Director, Growth and Operations, and blend advanced economic research with strong programming and data visualization skills.

As an Associate Economics Analyst at MEI, you will play a crucial role in supporting client and stakeholder engagements across the institute, collaborating with a diverse team of economists, econometricians, developers, visualization experts, and industry partners. Your responsibilities will include developing and testing hypotheses at the intersection of economics, retail, and commerce, managing small project streams, and delivering impactful results. You will identify creative analyses and develop proprietary diagnostic indices using large and complex datasets, generate insights, synthesize analyses into impactful storylines and interactive visuals, and assist in writing reports and client presentations. You will also have the opportunity to enhance existing products, develop new economic solutions, and contribute to thought leadership and intellectual capital.

To excel in this role, you should hold a Bachelor's degree in Economics (preferred), Statistics, Mathematics, or a related field. Proficiency in working with relational databases and writing SQL queries, and expertise in large-scale data processing frameworks and tools such as Hadoop, Apache Spark, Apache Hive, and Apache Impala, are essential. You should also be skilled in programming languages such as R or Python, with experience in data processing packages, and be able to create data visualizations that communicate complex economic insights to diverse audiences using tools like Tableau or Power BI. Experience with machine learning, econometric and statistical techniques, and strong problem-solving skills are desirable, along with excellent written and verbal communication skills, organizational abilities, and the capacity to prioritize work across multiple projects.

If you are a collaborative team player who is passionate about data, technology, and creating impactful economic insights, and you meet the above qualifications, this position at Mastercard's Economics Institute may be the perfect fit for you.
Posted 2 days ago
7.0 - 12.0 years
17 - 32 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Job Title: Senior Data Engineer
Location: Pan India
Experience: 7+ Years
Joining: Immediate/Short Notice Preferred

Job Summary: We are looking for an experienced Senior Data Engineer to design, develop, and optimize scalable data solutions across Enterprise Data Lake (EDL) and hybrid cloud platforms. The role involves data architecture, pipeline orchestration, metadata governance, and building reusable data products aligned with business goals.

Key Responsibilities:
- Design and implement scalable data pipelines (Spark, Hive, Kafka, Bronze-Silver-Gold architecture)
- Work on data architecture, modelling, and orchestration for large-scale systems
- Implement metadata governance, lineage, and business glossary using Apache Atlas
- Support DataOps/MLOps best practices and mentor teams
- Integrate data across structured and unstructured sources (ODS, CRM, NoSQL)

Required Skills:
- Strong hands-on experience with Apache Hive, HBase, Kafka, Spark, and Elasticsearch
- Expertise in data architecture, modelling, orchestration, and DataOps
- Familiarity with Data Mesh, data product development, and hybrid cloud (AWS/Azure/GCP)
- Knowledge of metadata governance, ETL/ELT, and NoSQL data models
- Strong problem-solving and communication skills
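As a rough illustration of the Bronze-Silver-Gold layering referenced in the responsibilities above, the following PySpark sketch stages raw events into cleaned and aggregated layers. The storage paths, schema, and session configuration are hypothetical assumptions, not details of this employer's platform.

```python
# Minimal PySpark sketch of a Bronze-Silver-Gold (medallion) layout.
# Paths, columns, and table layout are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion_demo").getOrCreate()

# Bronze: raw events landed as-is from the source (e.g. a Kafka sink or batch drop).
bronze = spark.read.json("s3://datalake/bronze/events/")

# Silver: cleaned and conformed records - drop malformed rows, standardise types,
# and de-duplicate on the business key.
silver = (
    bronze
    .filter(F.col("event_id").isNotNull())
    .withColumn("event_ts", F.to_timestamp("event_ts"))
    .dropDuplicates(["event_id"])
)
silver.write.mode("overwrite").parquet("s3://datalake/silver/events/")

# Gold: business-level aggregates served to analysts and downstream data products.
gold = (
    silver
    .groupBy(F.to_date("event_ts").alias("event_date"), "event_type")
    .agg(F.count("*").alias("event_count"))
)
gold.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://datalake/gold/daily_event_counts/"
)
```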
Posted 1 week ago
6.0 - 10.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
You are a Spark, Big Data - ETL Tech Lead on the Commercial Cards Global Data Repository development team at Citigroup. In this role, you will collaborate with the Development Project Manager, the development, testing, and production support teams, and various departments within Citigroup to ensure the success of TTS platforms. Exceptional communication skills across technology and business will be crucial as you take a visible role as a technical lead in building scalable, enterprise-level global applications.

Your responsibilities will include leading the design and implementation of large-scale data processing pipelines using Apache Spark on the Big Data Hadoop platform. You will develop and optimize Spark applications for performance and scalability, integrate data from various sources, and provide technical leadership for multiple large-scale global software solutions. Additionally, you will build relationships with senior business leaders, mentor junior developers, and stay abreast of the latest trends in big data and cloud computing.

Key challenges in this role include managing time and changing priorities in a dynamic environment, providing quick solutions to software issues, and quickly grasping key concepts. Your qualifications should include a Bachelor's or Master's degree in Computer Science or Information Technology, a minimum of 10 years of experience developing big data solutions using Apache Spark, and strong programming skills in Scala, Java, or Python.

Desirable skills for this role include experience with Java, Spring, and ETL tools such as Talend or Ab Initio, knowledge of cloud technologies such as AWS or GCP, experience in the financial industry, and familiarity with Agile methodology. Your ability to handle multiple projects, prioritize effectively, and promote teamwork will be essential for success in this role.

This job description offers an overview of the responsibilities and qualifications for the position of Spark, Big Data - ETL Tech Lead at Citigroup. Additional job-related duties may be assigned as needed.

Time Type: Full time
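As an illustration of the kind of Spark performance and scalability work this role calls for, the sketch below shows two common tuning levers: broadcasting a small dimension table so the large fact table is not shuffled during the join, and repartitioning by the aggregation key before a wide groupBy. The datasets, paths, column names, and partition count are hypothetical.

```python
# Minimal sketch of two common Spark tuning levers: broadcast joins and
# explicit repartitioning before a wide aggregation. All names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("spark_tuning_demo").getOrCreate()

transactions = spark.read.parquet("hdfs:///data/cards/transactions/")   # large fact table
merchants = spark.read.parquet("hdfs:///data/cards/merchants/")         # small lookup table

# Broadcasting the small side avoids shuffling the large fact table for the join.
enriched = transactions.join(F.broadcast(merchants), on="merchant_id", how="left")

# Repartition by the aggregation key so the groupBy shuffle is spread evenly
# across executors; 200 partitions here is an illustrative figure, not a recommendation.
daily_spend = (
    enriched
    .repartition(200, "merchant_id")
    .groupBy("merchant_id", F.to_date("txn_ts").alias("txn_date"))
    .agg(F.sum("amount").alias("total_spend"))
)

daily_spend.write.mode("overwrite").parquet("hdfs:///data/cards/daily_spend/")
```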
Posted 1 week ago
8.0 - 12.0 years
0 Lacs
Karnataka
On-site
As a Software Development Engineer at our company, you will have the opportunity to work in either Hyderabad, Telangana, India or Bengaluru, Karnataka, India. You should hold a Bachelor's degree in Computer Science or a related technical field, or have equivalent practical experience, along with at least 8 years of experience in software development using general-purpose programming languages. You will be responsible for designing and building solutions for data migrations and developing software for data warehouse migration. Expertise in programming languages such as Java, C/C++, Python, or Go will be crucial for this role.

Experience in systems engineering, including designing and building distributed processing systems using languages such as Java, Python, or Scala, will be highly beneficial. Knowledge of data warehouse design and experience developing enterprise data warehouse solutions are preferred. You should also have experience in data analytics and be able to leverage data systems to provide insights and collaborate on business decisions. Familiarity with modern open source technologies in the big data ecosystem, including frameworks such as Apache Spark, Apache Hive, and Apache Iceberg, will be an advantage.

In this role, you will design solutions that facilitate data migrations from petabytes to exabytes per day. Your work will accelerate customers' journey to Business Intelligence (BI) and Artificial Intelligence (AI) by delivering multiple products to external Google Cloud Platform (GCP) customers. Your responsibilities will include designing and developing software for data warehouse migration, collaborating with Product and Engineering Managers, providing technical direction and mentorship to the team, owning the end-to-end delivery of new features, and engaging with cross-functional partners to build and deliver integrated solutions.

Join us in our mission to accelerate organizations' ability to digitally transform their business and industry by delivering enterprise-grade solutions that leverage cutting-edge technology and tools. As a trusted partner, we help customers around the world enable growth and solve their most critical business problems.
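To illustrate one concrete warehouse-migration step of the sort described above, the sketch below reads a legacy Hive table with Spark and rewrites it as an Apache Iceberg table. The catalog, database, and table names are hypothetical, and the sketch assumes Spark 3.x with an Iceberg catalog already configured in the Spark session.

```python
# Minimal sketch of a Hive-to-Iceberg migration step (Spark 3.x with an Iceberg
# catalog assumed). Catalog, database, and table names are hypothetical.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("hive_to_iceberg_migration")
    .enableHiveSupport()
    .getOrCreate()
)

# Source: an existing Hive-managed table in the legacy warehouse.
legacy = spark.table("legacy_dw.sales_fact")

# Target: an Iceberg table managed through the configured catalog.
(
    legacy.writeTo("lake_catalog.analytics.sales_fact")
    .using("iceberg")
    .createOrReplace()
)
```

A real migration would add validation (row counts, checksums) and incremental backfill logic around this core read-and-rewrite step.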
Posted 2 weeks ago
4.0 - 8.0 years
0 Lacs
Pune, Maharashtra
On-site
The Applications Development Intermediate Programmer Analyst position is an intermediate-level role in which you will participate in the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. Your main objective will be to contribute to applications systems analysis and programming activities.

Your responsibilities will include using your knowledge of applications development procedures and concepts, along with basic knowledge of other technical areas, to identify and define necessary system enhancements. This will involve using script tools, analyzing and interpreting code, consulting with users, clients, and other technology groups on issues, recommending programming solutions, and installing and supporting customer exposure systems. You will also apply fundamental knowledge of programming languages to design specifications, analyze applications to identify vulnerabilities and security issues, conduct testing and debugging, and serve as an advisor or coach to new or lower-level analysts.

In this role, you will identify problems, analyze information, and make evaluative judgments to recommend and implement solutions, resolving issues by selecting solutions based on your acquired technical experience. You will operate with a limited level of direct supervision, exercising independence of judgment and autonomy, and act as a subject matter expert to senior stakeholders and/or other team members.

You should have 4-6 years of proven experience in developing and managing big data solutions using Apache Spark and Scala, with a strong hold on Spark Core, Spark SQL, and Spark Streaming and strong programming skills in Scala, Java, or Python. Hands-on experience with technologies such as Apache Hive, Apache Kafka, HBase, Couchbase, Sqoop, and Flume, proficiency in SQL, experience with relational databases (Oracle/PL-SQL), and familiarity with data warehousing concepts and ETL processes are required. You should also have experience in performance tuning of large technical solutions, along with knowledge of data modeling, data architecture, data integration techniques, and best practices for data security, privacy, and compliance.

Furthermore, experience with Java, web services, microservices, SOA, Apache Spark, Hive, SQL, and the Hadoop ecosystem is necessary. You should have experience developing frameworks and utility services, delivering high-quality software following continuous delivery, and using code quality tools, as well as experience creating large-scale, multi-tiered, distributed applications with Hadoop and Spark and knowledge of implementing different data storage solutions.

The ideal candidate will have a Bachelor's degree, a University degree, or equivalent experience. Please note that this job description provides a high-level overview of the work performed, and other job-related duties may be assigned as required.
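Since the role combines Spark Streaming with Apache Kafka, here is a minimal Spark Structured Streaming sketch that consumes a Kafka topic and maintains per-minute event counts. The broker address, topic name, and checkpoint path are hypothetical, and the job assumes the spark-sql-kafka connector package is available on the cluster.

```python
# Minimal sketch of a Structured Streaming job reading from Kafka and keeping
# per-minute counts. Requires the spark-sql-kafka connector; all names are
# hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka_stream_demo").getOrCreate()

# Subscribe to a Kafka topic; each record arrives with binary key/value columns
# plus metadata such as the record timestamp.
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")
    .option("subscribe", "payments")
    .load()
)

# Decode the payload and count events per 1-minute window on the record timestamp.
counts = (
    events
    .select(F.col("value").cast("string").alias("payload"), F.col("timestamp"))
    .groupBy(F.window("timestamp", "1 minute"))
    .count()
)

# Write the running aggregates to the console; a production job would instead
# target HBase, Hive, or another Kafka topic.
query = (
    counts.writeStream
    .outputMode("complete")
    .format("console")
    .option("checkpointLocation", "/tmp/checkpoints/kafka_stream_demo")
    .start()
)
query.awaitTermination()
```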
Posted 1 month ago
4.0 - 8.0 years
0 Lacs
Pune, Maharashtra
On-site
The Applications Development Intermediate Programmer Analyst position is an intermediate-level role in which you will contribute to the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. Your main objective will be to assist in applications systems analysis and programming activities.

You will use your knowledge of applications development procedures and concepts, along with basic knowledge of technical areas, to identify and define necessary system enhancements. This includes using script tools, analyzing code, and consulting with users, clients, and other technology groups to recommend programming solutions. Additionally, you will install and support customer exposure systems and apply fundamental knowledge of programming languages to design specifications.

As an Intermediate Programmer Analyst, you will analyze applications to identify vulnerabilities and security issues, conduct testing and debugging, and serve as an advisor or coach to new or lower-level analysts. You will identify problems, analyze information, and make evaluative judgments to recommend and implement solutions. Operating with a limited level of direct supervision, you will exercise independence of judgment and autonomy while acting as a subject matter expert to senior stakeholders and/or other team members.

In this role, it is crucial to appropriately assess risk when making business decisions, with a focus on safeguarding Citigroup, its clients, and assets. This includes driving compliance with applicable laws, rules, and regulations, adhering to policies, applying sound ethical judgment, and escalating, managing, and reporting control issues with transparency.

Qualifications:
- 4-6 years of proven experience in developing and managing big data solutions using Apache Spark and Scala
- Strong programming skills in Scala, Java, or Python
- Hands-on experience with technologies such as Apache Hive, Apache Kafka, HBase, Couchbase, Sqoop, and Flume
- Proficiency in SQL and experience with relational databases (Oracle/PL-SQL)
- Experience working on Kafka and JMS/MQ applications
- Familiarity with data warehousing concepts and ETL processes
- Knowledge of data modeling, data architecture, and data integration techniques
- Experience with Java, web services, XML, JavaScript, microservices, SOA, etc.
- Strong technical knowledge of Apache Spark, Hive, SQL, and the Hadoop ecosystem
- Experience developing frameworks and utility services, logging/monitoring, and delivering high-quality software
- Experience creating large-scale, multi-tiered, distributed applications with Hadoop and Spark
- Profound knowledge of implementing different data storage solutions such as RDBMS, Hive, HBase, Impala, and NoSQL databases

Education:
- Bachelor's degree or equivalent experience

This job description provides a high-level overview of the responsibilities and qualifications for the Applications Development Intermediate Programmer Analyst position. Other job-related duties may be assigned as required.
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
As a skilled professional in the field of Big Data and Analytics, you will use your expertise to drive impactful solutions for Standard Chartered Bank. Your role will involve applying your proficiency in technologies and frameworks such as Hadoop, HDFS, Hive, Spark, Bash scripting, and SQL. Your ability to handle raw and unstructured data while adhering to coding standards and software development life cycles will be crucial to the success of the projects you work on.

In addition to your technical skills, you will play a key role in Regulatory & Business Conduct by embodying the highest standards of ethics and compliance. Your responsibilities will include identifying and mitigating risks and ensuring compliance with relevant laws and regulations. Collaborating effectively with FCSO development teams and FCSO business stakeholders will be essential to achieving the desired outcomes.

Your technical competencies in areas such as Hadoop, Apache Hive, PySpark, SQL, Azure DevOps, and Control-M will be instrumental in fulfilling the responsibilities of this role. Your action-oriented approach, ability to collaborate, and customer focus will further contribute to your success in this position.

Standard Chartered Bank is committed to fostering a diverse and inclusive work environment where each individual's unique talents are celebrated. By joining our team, you will have the opportunity to make a positive impact and drive commerce and prosperity through our valued behaviours. If you are passionate about using your skills to create meaningful change and grow professionally, we invite you to be a part of our dynamic team at Standard Chartered Bank.
Posted 1 month ago
8.0 - 12.0 years
7 - 16 Lacs
Hyderabad, Bengaluru
Work from Office
Role & responsibilities

Primary Skills: Big Data, Hadoop / Apache Hive
Must-have Skills: Synapse, Data Lake
Experience: 8 to 12 years
Location / Shift / Work Mode: Bangalore or Hyderabad, 1 to 10 pm

Preferred candidate profile:
- Strong knowledge of distributed computing and big data frameworks: Hadoop, Spark, Hive, Presto, Kafka
- Hands-on experience with cloud platforms: AWS (S3, Glue, EMR, Athena, Redshift), Azure (Synapse, ADLS, Data Factory), or GCP (BigQuery, Dataflow, Pub/Sub)
- Deep understanding of data lake, lakehouse, and warehouse design principles
- Proficiency in data modeling, schema design, partitioning strategies, and metadata management
- Experience with CI/CD, Terraform, Git, and orchestration tools like Airflow
- Familiarity with data catalog, lineage, and governance tools (e.g., DataHub, Purview, Collibra)
- Strong problem-solving, communication, and stakeholder management skills
Posted 1 month ago